We are rapidly growing and looking for the best minds and spirits to join us on our journey. We know our product is only as great as the people building the software and hardware and harnessing data for good causes. Being a great team member means being eager to learn and grow, challenging while being challenged, and working with the team with enthusiasm and passion. Unless stated otherwise, all positions are based in Tel Aviv.
Senior DevOps Developer
Prisma Photonics develops a multi-layer, multi-technology product that requires continuous online integration in an elastic development environment. We are searching for a DevOps lead engineer to enhance our CI/CD for on-prem and cloud architectures. In this role, you will be responsible for our IaaS design and for the ongoing provisioning and configuration of computing and storage resources. You will automate the installation and deployment of our edge-computing software. In addition, you will provide the tools for our operations team to monitor and track statistics of our resources. As we continuously grow, we need to adapt and scale our infrastructure to support more products and a growing base of large customers.
2+ years of experience as a DevOps developer in both on-prem and public cloud environments
Extensive knowledge of build, provisioning, and deployment services and tools such as Jenkins, Artifactory, Ansible, CloudFormation, or their equivalents
2+ years of programming experience in high-level or scripting languages
Skilled at staying attuned to development processes and anticipating development requirements
Able to challenge yourself and the team members positively
Data Scientist
As a member of our ML team at Prisma Photonics, you will tackle unique challenges in the domain of optical fiber sensing. Fiber-sensing data captures a unique signal generated along an optical fiber deployed alongside critical utility infrastructure and interprets it as a continuous acoustic signal originating from tens of thousands of virtual sensors spanning hundreds of kilometers. You will develop expertise in this domain and apply novel ML approaches that address its specific challenges to detect, classify, and track spatio-temporal events. In addition, you will be responsible for training models and for the inference production code running on the edge product. As a data scientist at Prisma Photonics, you will have access to the full data-acquisition stack and work with the software engineering, hardware, data, and field application teams.
3+ years of experience and a deep understanding of DL, ML, and numerical libraries: PyTorch, TensorFlow, NumPy, pandas, XGBoost/LightGBM
Master's degree in a quantitative field such as Statistics, Computer Science, or Mathematics, or equivalent practical experience
2+ years of programming experience in complex software systems
Excellent communication skills
Highly values team collaboration
Eager to learn and adapt to new challenges
Previous publication record in ML and adjacent disciplines
Experience with seismic data and geophysics
Familiarity with MLOps tools such as TensorBoard, Weights & Biases, and ClearML
Familiarity with DevOps and big-data infrastructure
Applied Scientist
Prisma Photonics is seeking a talented applied scientist passionate about classical digital signal processing of acoustic/seismic signals in a real-time environment. You are enthusiastic about working on challenging real-world problems and can distill requirements and innovate by leveraging existing academic and industrial research or your own out-of-the-box creative thinking. As an applied scientist, you will develop detection, classification, and tracking algorithms based on a DSP approach integrated with ML models. You will develop mainly in MATLAB/Python and will be required to write parallelized CUDA code that utilizes the GPU to the maximum. You will be responsible for the entire development cycle, from design to testing.
Advanced degree in Physics, Electrical Engineering, or a related field
2+ years of experience in digital signal processing using MATLAB
Experience with a high-level programming language such as Python, C#, C++, or Java
Familiarity with GPU programming in CUDA
Hands-on experience with Python packages such as PyTorch, TensorFlow, scikit-learn, etc.
Experience with additional programming languages and design patterns
Data Engineer
As a company with unique and rich data, we build on a robust data pipeline to provide data to our machine-learning systems. We invest heavily in data collection from real-world scenarios and generate data by conducting massive field experiments with infrastructure operators worldwide. In this position, you will be responsible for a scalable data acquisition, storage, and ingestion architecture and its deployment. In addition, you will design, build, and maintain our on-prem and cloud data stores and analyze data for ML. We expect you to be knowledgeable and creative in selecting appropriate existing solutions when designing the product. You will need to become familiar with the intricacies of our data-acquisition process and its physical properties. The role requires experience and familiarity with common DataOps, DevOps, and cloud services. You will be expected to make design and process decisions based on your research and accumulated knowledge, to present them clearly, and to review the team's work with meaningful input.
Bachelor’s degree in a quantitative field such as Mathematics, Computer Science, Engineering, etc.
4+ years of experience and a deep understanding of SQL and NoSQL databases (MySQL, Elasticsearch, MongoDB, PostgreSQL)
2+ years of experience in Python programming
Knowledge of data-science-related infrastructure, including AWS and GCP
Familiarity with scripting languages, including Bash, Windows CMD, and PowerShell
Strong collaborator with teams and peers
Innovative with a growth mindset
Additional software programming experience
Experience with Hadoop, MapReduce, Spark, or another distributed computing platform
Cloud provisioning and administration experience and system admin experience in Windows and Linux
Senior Software Engineer
In this role, you will be responsible for designing, implementing, and integrating Prisma Photonics software systems running on advanced endpoint computer systems. These systems provide high throughput, connectivity, and computing services to our algorithms and require optimized software and lean runtime data processing. As a provider to the world's critical-infrastructure operators, we expect you to design for the highest quality standards and extreme conditions, with cyber security built into the product. You will work in a diverse technology stack and be required to code and integrate in multiple languages and technologies, including C#, C++, Python, MATLAB, CUDA, Rust, and Go. Your job will include understanding and contributing to the many microservices involved, from the driver level harnessing CPU/GPU/proprietary hardware, through dataflow from storage and databases, to complex algorithm optimization. As a senior member of the team, you will be asked to mentor and review the team's output at all development stages and to foster collaboration between teams.
Bachelor’s or advanced degree in Computer Science, Mathematics, or a related field
4+ years of experience in Python
4+ years of experience in C++ or C#
Excellent understanding and knowledge of software design patterns, computer architecture, and multithreading
Familiar with microservices architecture and Docker
Strong collaborator and team player
Persistent problem solver
Record of excellence
Knowledge of ML and its common libraries
Experience with OOD, middleware, and databases
Experience in direct customer support and failure analysis
Experience with low-level system programming, drivers, or firmware
Field Application Engineer
We are looking for an experienced Field Application Engineer (FAE) to provide our customers with timely and direct technical support. The FAE will be part of the System and Integration group and will be directly involved in feature development, from design to integration. The preferred candidate will have a solid technical background and a good overall understanding of system-level challenges. In this role, you will:
Participate in the project life cycle, including definition, hardware design, time to market, validation, technical support, and product training
Plan and lead PoC and pilot activities with customers
Master the product's features and specifications, develop a thorough understanding of customers' requirements and target solutions, and prepare technical responses to RFIs and RFPs
Demonstrate the product as a technical expert in customer meetings, international trade shows, and industry events (audiences ranging from C-level executives to technical experts)
Collaborate closely with R&D groups, working with both software and hardware teams
Organize and back up system data gathered from customer activities
Troubleshoot system stability and performance at customers' request, resolve engineering issues, and conduct regular follow-up on customer activities
Be the customer's technical channel, enabling R&D to fix customer issues and improve product quality
Write product documentation, e.g., application notes, evaluation plan proposals, activity reports, etc.
Manage accounts technically
B.Sc. or higher degree in Electrical Engineering / Computer Science from an accredited university or college
7+ years of proven hands-on experience as an FAE
Experience in project management
Experience in multidisciplinary systems engineering
Experience with data analysis programs and programming platforms
Broad understanding of laser technology, communication systems, network management
Self-motivated with analytical skills and a problem-solving orientation
Excellent communication skills (verbal and written) in English and Hebrew
Independent, self-motivated, and dedicated
Willingness and ability to travel globally and extensively (50%+)
Willingness to work across multiple time zones, supporting customers abroad