Latest jobs

The position
You'll be joining a team of engineers across Frontend, Backend, SRE and QA. We're organised into cross-functional development teams assigned to specific verticals. This role is open for several teams, and we will define the exact team that you will be joining during the interview process based on the business needs and your preferences. Regardless of the specific team, you will be working on building tools, APIs and integrations for one of our products.
Tech stack
Our backend is built with Elixir and Phoenix, with a Postgres database. We use React and Next.js for our front-end. GitLab serves as our version control tool, issue tracker, and CI/CD solution. Our applications are hosted on AWS. We rely fully on our CI for deployments and deploy multiple times per day.
Key Responsibilities
- Lead the development of major team-scoped projects and participate in cross-team initiatives for Remote's HR and Payroll products.
- Actively participate in product work within the team: provide feedback, suggest solutions to problems, and use technical insight and expertise to propose product improvements.
- Maintain a good understanding of the team's domain, from both the product and engineering sides.
- Provide feedback on code reviews.
- Contribute to the shared codebase.
- Debug and solve technical and business issues.
- Participate in non-team activities, such as support rotations, hiring process, RFC discussions, etc.
- Mentor and provide guidance to other engineers.
- Investigate, propose and participate in implementation of improvements to our platform.
- Implement interfaces with performance, accessibility, and API design in mind.
- Redesign how engineering work ships, with autonomous agents as the default execution layer.
- Propose and operationalize agentic workflows end-to-end (spec → plan → execute → verify) to deliver outcomes faster.
- Build reusable agentic workflows and primitives in the codebase so teams can apply them repeatedly across domains.
- Use verification loops (tests, checks, evals, guardrails) to ensure results are correct, secure, reliable, and scalable.
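For illustration, the spec → plan → execute → verify loop with a verification gate described above might be sketched roughly like this in Python (all function names and the spec layout are hypothetical assumptions, not part of Remote's codebase):

```python
# Hypothetical sketch of a spec -> plan -> execute -> verify loop.
# plan/execute/verify and the spec layout are illustrative assumptions.

def plan(spec):
    # Break the spec into an ordered list of steps.
    return [f"step: {task}" for task in spec["tasks"]]

def execute(step):
    # Stand-in for handing a step to an autonomous agent or script.
    return {"step": step, "output": step.upper()}

def verify(result, checks):
    # Gate: every verification check must pass.
    return all(check(result) for check in checks)

def run(spec, checks, max_retries=2):
    results = []
    for step in plan(spec):
        for _ in range(max_retries + 1):
            result = execute(step)
            if verify(result, checks):
                results.append(result)
                break
        else:
            raise RuntimeError(f"verification failed for {step!r}")
    return results

spec = {"tasks": ["write tests", "implement feature"]}
checks = [
    lambda r: bool(r["output"]),                 # non-empty output
    lambda r: r["step"] in r["output"].lower(),  # output matches the step
]
results = run(spec, checks)
```

The key idea is the closed loop: each step is retried until its verification checks pass, so unverified agent output never lands in the result set.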
What you bring
Must have
- Strong engineering fundamentals and a track record of shipping production systems that are secure, reliable, and scalable.
- Practical experience designing or adopting agentic/automation workflows (or comparable systems) and improving them through iteration.
- Ability to think in systems: define specs clearly, break down plans, instrument verification, and close the loop on quality.
- Postgres (or similar).
- CI/CD (GitLab, GitHub, Jenkins or similar).
Nice to have
- Kubernetes
- Docker
- AWS
- Next.js
- React/Vue/Angular
Benefits & perks
- work from anywhere
- flexible paid time off
- flexible working hours (async)
- 16 weeks paid parental leave
- mental health support services
- stock options
- learning budget
- home office budget & IT equipment
- budget for local in-person social events or co-working spaces

Who You Are
You are an experienced Python developer who will deliver Adapty's tooling to streamline our SDK integrations and domain-entity management (products, prices, paywalls, experiments), ensure our analytics services are predictable, stable, and covered by autotests, and make sure our new APIs are released on time.
What You'll Be Doing
- Leading a distributed team as a hands-on leader, with an expected 50/50 split between coding and management, setting a high bar for delivery, task decomposition, and estimation.
- Shaping our OLAP analytics solution (ClickHouse), delivering industry-leading mobile app performance metrics used by clients to make critical business decisions.
- Developing revenue growth tools, including A/B testing, onboarding, paywalls, and placements that drive rapid mobile app growth.
- Building a next-gen Flow Builder, a WYSIWYG tool for app onboarding and paywalls that dramatically speeds up idea validation for our clients.
- Putting yourself into our customers' shoes to balance user impact with long-term architectural goals.
Tech Stack
- Python
- PostgreSQL
- ClickHouse
- Kafka
Team Description
- You'll work alongside talented peers who value ownership, knowledge sharing, and building products that truly make an impact.
What You'll Need
- 8+ years of experience building backend systems using Python, PostgreSQL, ClickHouse, and Kafka.
- Strong ability to translate product requirements into clear technical designs, with hands-on experience in unit, integration, and end-to-end testing.
- Critical thinking and a system-level mindset: balancing short-term goals with long-term vision, identifying root causes, and making thoughtful, collaborative decisions.
- Solid understanding of the B2B SaaS business model and customer requirements, and how we can surpass them.
What's in It for You
- A strong product with industry-leading metrics. Adapty is among the top 5% of the fastest-growing SaaS companies.
- Career growth and competitive compensation. Build your team, own critical product areas, and grow with us.
- Direct communication and ownership. No bureaucracy, no politics, just impact.
- Flexible remote work. Join us from anywhere, with a schedule that fits your life.
- Additional benefits. English lessons, sports reimbursements, laptop coverage, and more.

What you'll do
- Design, build, and maintain the critical infrastructure and services that underpin the entire product, ensuring performance, stability, and reliability across the globe.
- Help design, build, and operate the highly scalable, resilient, and globally distributed platform that handles billions of requests daily with exceptional uptime.
Who you are
- Have a strong foundation in computer science, including algorithms, data structures, concurrency, and systems design.
- Have experience building and scaling high-traffic, high-availability backend systems.
- Have experience designing and operating backend infrastructure, including deployment automation, capacity planning, and observability.
- Are comfortable working across different layers of the stack, from performance-critical code to production operations.
- Enjoy learning new technologies and tools in a fast-moving, large-scale environment.
- Collaborate effectively and contribute to a positive, ownership-driven team culture.
Our Tech Stack
- Languages: Rust, C++, Python
- Cloud Infrastructure: AWS
- CI/CD: Jenkins, GitHub Actions
- Monitoring & Observability: Prometheus, Grafana
Team
The Core Team sits at the heart of Constructor, shaping Constructorโs in-house search engine, where user requests meet customer catalogs and Constructorโs learnings. We build and operate the foundational systems that power every request flowing through the platform, enabling fast, reliable, and intelligent interactions at massive scale. Our systems handle billions of requests daily with exceptional uptime, and we take pride in delivering a highly scalable, resilient, and globally distributed platform. As part of the Core team, you will help design, build, and maintain the critical infrastructure and services that underpin the entire product, ensuring performance, stability, and reliability across the globe.
Benefits
- Unlimited vacation time - we strongly encourage all of our employees to take at least 3 weeks per year
- Fully remote team - choose where you live
- Work-from-home stipend! We want you to have the resources you need to set up your home office
- Apple laptops provided for new employees
- Training and development budget for every employee, refreshed each year
- Maternity & paternity leave for qualified employees
- Work with smart people who will help you grow and make a meaningful impact
- Base salary: $80k-$120k USD, depending on knowledge, skills, experience, and interview results
- Stock options - offered in addition to the base salary
- Regular team offsites to connect and collaborate

What you'll do
We are looking for a CI/CD Engineer who will assess current processes, suggest improvements, and collaborate across infrastructure and application code to help maintain smooth, stable releases.
- Design, debug, and continuously improve CI/CD pipelines for speed, reliability, and maintainability;
- Diagnose and resolve build failures, flaky tests, and deployment issues;
- Automate secret management and rotation across environments;
- Integrate pipelines with external systems (GitLab, cloud providers, Kubernetes, third-party APIs);
- Develop and maintain automation tooling, primarily in Python;
- Dive into existing Python codebases to understand current CI/CD workflows and fix issues at the source;
- Contribute to product code when necessary to unblock releases or improve delivery processes;
- Monitor pipeline performance and systematically eliminate bottlenecks;
- Collaborate closely with developers and infrastructure engineers to refine build, test, and deployment workflows.
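As a rough illustration of the pipeline-monitoring bullet above, here is a minimal Python sketch that flags slow pipeline stages from job timings (job names, durations, and the threshold are made up; a real version would pull durations from the GitLab API rather than hard-code them):

```python
# Hedged sketch: spotting pipeline bottlenecks from job timings.
# All job names and durations below are illustrative sample data.
from statistics import median

jobs = [
    {"name": "build", "seconds": [180, 200, 190]},
    {"name": "test", "seconds": [600, 1200, 650]},
    {"name": "deploy", "seconds": [90, 95, 100]},
]

def bottlenecks(jobs, threshold_seconds=300):
    # Flag jobs whose median duration exceeds the threshold,
    # slowest first; median is robust to one-off flaky runs.
    flagged = []
    for job in jobs:
        med = median(job["seconds"])
        if med > threshold_seconds:
            flagged.append((job["name"], med))
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

slow = bottlenecks(jobs)
```

Using the median rather than the mean keeps a single 1200-second outlier run from dominating the picture, which matters when flaky infrastructure occasionally inflates a job.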
Who you are
- Strong hands-on experience with CI/CD platforms, preferably GitLab CI; experience with other tools is also a plus;
- Excellent Python skills and ability to work effectively in large, unfamiliar codebases;
- Proven ability to debug complex pipeline and deployment issues end-to-end;
- Experience with secret management systems (e.g., HashiCorp Vault, AWS Secrets Manager);
- Solid understanding of REST and/or GraphQL APIs;
- Experience with containerization and orchestration (Docker, Kubernetes);
- Familiarity with at least one major cloud provider (AWS, GCP, or Azure).
NICE TO HAVE
- Infrastructure as Code experience (Terraform, Ansible, Helm);
- Shell scripting skills;
- Experience with monitoring and observability tools (Prometheus, Grafana, ELK, Loki);
- Experience with asynchronous Python;
- Background in build and deployment optimization.
Tech stack
- CI/CD: GitLab CI (preferred);
- Programming: Python;
- Containerization/orchestration: Docker, Kubernetes;
- Secret management: HashiCorp Vault, AWS Secrets Manager;
- Cloud: AWS, GCP, or Azure;
- Infrastructure as Code: Terraform, Ansible, Helm;
- Monitoring/observability: Prometheus, Grafana, ELK, Loki;
- APIs: REST, GraphQL;
- Other: experience with external systems integration and scripting as needed.
Benefits
- Comprehensive health insurance with coverage for your well-being
- Paid sick leave up to 10 days without medical certificate
- 20 days of paid vacation plus additional leave for important life events
- Learning and growth opportunities with support for professional development
- Language learning support for multilingual collaboration
- Modern hardware provided for your work
- International team environment across multiple countries
- Corporate events and team activities
- Welfare support program for critical situations
- Gifts and support for major life milestones
Team
Remote-friendly with offices in Latvia, Malta, Spain, and Montenegro; vibrant work culture with relocation opportunities and full support for remote talent across the globe.
Responsibilities
- Perform preliminary, receiving, in-process, and final inspections of aircraft components in accordance with ZeroAvia (ZA) design data, industry standards, customer requirements, and regulatory requirements
- Conduct incoming inspections of purchased parts, materials, and aircraft components to verify condition, finish, dimensions, configuration, and compliance with inspection plans
- Interpret engineering drawings and approved design data, including GD&T
- Use a wide range of measuring and inspection tools (e.g. calipers, micrometers, multimeters, radius gauges) and support CMM inspection activities
- Perform and document First Article Inspections (FAI) in accordance with AS9102
- Develop, maintain, and help lead quality inspection processes including incoming inspection, in-process inspection, FAI, characteristics verification, and product conformity
- Verify and review supplier-submitted documentation and certification data
- Document inspection results through inspection plans, reports, and proprietary quality systems
- Raise, document, and support resolution of non-conformities in accordance with ZA procedures
- Maintain accurate receiving and inspection records
- Support internal and external (supplier and regulatory) audits
- Conduct supplier source inspections as required, including occasional international travel
Qualifications and Expertise Required
- Minimum 3 years' experience in aerospace final inspection within a Production Certificate Organisation
- Minimum 8 years' experience in final inspection of mechanical, electrical, and electronic components
- Strong working knowledge of AS9100 and AS9102
- Proven experience performing and recording First Article Inspections (FAI)
- Ability to read, interpret, and apply engineering drawings and GD&T
- Hands-on experience using inspection and test equipment (digital multimeter, calipers, micrometers, radius gauges, etc.)
- Knowledge of and experience with special processes and NADCAP requirements
- Experience using ERP and quality systems (e.g. SAP, Oracle, NetSuite, Siemens, TipQA) and Microsoft Office tools
- High School Diploma, Associate Degree, or Bachelor's Degree
- Strong written and verbal communication skills in English
- Ability to work independently and as part of a small, fast-paced team
Desirable
- Experience in electrical and electronics manufacturing and inspection
- Familiarity with IPC-A-610, IPC-A-620, and J-STD-001
- Familiarity with P21J environments
- Additional language skills
- Willingness and ability to travel internationally on occasion
Tech Stack and Tools
- ERP and quality systems (e.g. SAP, Oracle, NetSuite, Siemens, TipQA)
- Microsoft Office tools
- Inspection and test equipment (e.g. digital multimeters, calipers, micrometers, radius gauges)
Team and Collaboration
Work closely with engineering, manufacturing, supply chain, and suppliers to ensure product conformity throughout the production lifecycle. The role sits within ZA's Quality team and supports audits and supplier evaluations, contributing to a fast-moving, innovative environment.
Benefits and Perks
- Private health and dental care
- Mental health support
- Free lunch and healthy snacks
- Sports, games and culture clubs
- Stock options
- 25 days holiday, plus public holidays
- Free EV Charging and EV Club membership
- Salary Sacrifice Schemes for EV Club, Curry's Tech, Cycle to Work, and Ikea Furniture
- Weekly Spot Bonuses
- Income Protection and Legal Support
- Relocation Support

What you'll do
- Build, operate, and improve ETL/ELT pipelines, Spark workloads, and data warehouse components.
- Develop tools and automations to simplify and harden data pipeline workflows and general operations.
- Design, implement, and maintain scalable, highly available cloud infrastructure and services with a focus on automation and reliability.
- Develop and operate observability tooling for monitoring, logging, tracing, and data-pipeline metrics (freshness, completeness, latency, error rates).
- Collaborate with development teams to instrument, deploy, and troubleshoot production systems across microservices on Kubernetes.
- Operate, deploy, and monitor data infrastructure and cloud services from development to production.
- Own availability, scalability, and performance of systems, focusing on data pipelines and warehousing components.
- Partner with peer SREs to roll out production changes and mitigate data-related and infrastructure incidents.
- Troubleshoot issues across data pipelines and production systems; support capacity planning and analyze system and data workflow performance.
- Provide data engineering expertise to engineering teams and work cross-functionally with developers and analysts on designing, releasing, and troubleshooting production systems.
- Own team projects and ensure timely delivery.
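To make the observability bullet above concrete, here is a small hypothetical Python sketch of two of the data-pipeline metrics it mentions, freshness and error rate (field names and values are illustrative, not a real schema):

```python
# Illustrative sketch of data-pipeline health metrics.
# The run records and timestamps below are assumed sample data.
from datetime import datetime, timedelta, timezone

def freshness_minutes(last_loaded_at, now):
    # Freshness: how stale the newest loaded record is, in minutes.
    return (now - last_loaded_at).total_seconds() / 60

def error_rate(runs):
    # Error rate: share of failed runs over the observed window.
    failed = sum(1 for r in runs if r["status"] == "failed")
    return failed / len(runs) if runs else 0.0

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
last_load = now - timedelta(minutes=45)
runs = [{"status": "success"}] * 9 + [{"status": "failed"}]

fresh = freshness_minutes(last_load, now)
rate = error_rate(runs)
```

In practice metrics like these would be exported to a monitoring system and alerted on against per-pipeline SLO thresholds, rather than computed ad hoc.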
Who you are
As a Site Reliability Data Engineer based in Athens, you will play a critical role in ensuring the reliability, scalability, and performance of our data infrastructure and pipelines. You will collaborate closely with engineering teams to build and operate robust cloud-based systems, driving automation and observability across our platform.
Requirements
- BS/MS degree in Computer Science, Engineering, or equivalent practical experience
- 2+ years of experience in site reliability engineering, data engineering, or a closely related role, including programming
- Experience with a major cloud provider (AWS or GCP)
- Hands-on experience with infrastructure-as-code or configuration management tools (Terraform or Ansible)
- Experience with ETL/ELT concepts and tools (Airflow or dbt)
- Experience with Apache Spark or similar distributed data processing frameworks
- Experience with cloud data warehouses (BigQuery, Redshift, or Snowflake)
- Proficiency in at least one programming language (Python, Go, or Scala)
- Excellent written English proficiency
- Legally authorized to work in Greece
Preferred Qualifications
- Production experience with Kubernetes
- Experience with centralized monitoring and logging systems
- Experience with streaming systems (Kafka or Spark Streaming)
Tech Stack
- AWS or GCP
- Terraform or Ansible
- Airflow or dbt
- Apache Spark
- BigQuery, Redshift, or Snowflake
- Python, Go, or Scala
- Kubernetes
- Monitoring and logging systems
- Kafka or Spark Streaming
Benefits & Perks
- Comprehensive Health Coverage: A robust health insurance plan that includes coverage for your dependents.
- Competitive Compensation: An attractive salary paired with a performance-based bonus plan.
- Flexible Work Model: Hybrid setup with two days working from home and three in the office.
- Top-Tier Tools: Apple gear and access to the latest productivity tools to help you excel.
- Stay Connected: A mobile data plan to keep you online wherever you are.
- Delicious Perks: Fresh, tasty food at the office to fuel your productivity.
- Relocation Bonus: Help you settle in smoothly in Athens.
Team Description
You will collaborate closely with engineering teams to build and operate robust cloud-based systems, driving automation and observability across our platform.

What you'll do
- Engage with scientists, lab managers, and operations teams to capture workflows (sample accessioning, tracking, histology, imaging, metadata, reporting)
- Analyze current-state processes and support harmonization into standardized LIMS workflows
- Document functional requirements, workflow mappings, data structures, and integration needs
- Facilitate workshops, stakeholder interviews, and cross-functional discussions
- Collaborate with Product Managers, Solution Architects, and development teams to validate solutions
- Create user stories, acceptance criteria, test scenarios, and support UAT activities
- Identify risks, dependencies, and propose mitigation strategies
- Prepare training materials, SOP inputs, and support solution rollout
Who you are
- Strong hands-on experience with Sapio LIMS
- Solid expertise in the Histopathology domain
- Experience working in laboratory environments and scientific workflows
- Experience in global, multi-site projects with budgets exceeding USD 7M
- Strong knowledge of sample lifecycle management and scientific data/metadata
- Experience in requirements engineering (BRDs, user stories, RTMs, process mapping)
- Strong stakeholder management and communication skills
Nice to have
- Change management experience
Team
Quantori is an international team: we have colleagues who work not only from our offices but also remotely from all over the world.
We offer
- Competitive compensation
- Remote or office work
- Flexible working hours
- A team with excellent tech expertise

What you'll do
- Apply a rigorous scientific method and advanced mathematical and statistical methods to develop sophisticated trading models
- Perform alpha research, execution research, and model performance optimization as part of the Quant team
- Develop, augment, and calibrate backtest simulations for trading teams to use
- Develop quantitative tools and libraries to aid the strategy development process
- Build and continuously improve mathematical models, translate algorithms into code (C++), and implement new trading models and signals in a live trading environment (Linux)
- Stay up to date with state-of-the-art technologies and tools including technical libraries, computing environments and academic research
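As a toy illustration of the backtest-simulation work described above, here is a minimal Python sketch that replays a moving-average crossover signal over historical prices (the prices, signal, and parameters are all illustrative; production backtests in this role would be written in C++):

```python
# Toy backtest sketch: replay a price-vs-moving-average signal
# over historical prices. All data and parameters are made up.

def backtest(prices, window=3):
    cash, position = 100.0, 0.0
    for i in range(window, len(prices)):
        # Trailing moving average over the previous `window` prices.
        avg = sum(prices[i - window:i]) / window
        price = prices[i]
        if price > avg and position == 0:
            # Signal: price crosses above the average -> go long.
            position = cash / price
            cash = 0.0
        elif price < avg and position > 0:
            # Signal: price crosses below the average -> exit.
            cash = position * price
            position = 0.0
    # Mark any open position to the last observed price.
    return cash + position * prices[-1]

prices = [10, 10, 10, 11, 12, 13, 12, 11, 10]
final_value = backtest(prices)
```

Even in a sketch this small, the core calibration questions of the role show up: the window length, the entry/exit rule, and how open positions are marked all change the simulated P&L.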
Who you are
- Advanced degree from a top-tier university (e.g. Mathematics, Physics, Computer Science) or comparable experience in the field of quantitative finance
- Excellent knowledge of probability and statistics, including experimental design, predictive modeling, optimization, and inference
- Solid understanding of the fundamentals and workflows of handling large data sets with analytical and quantitative methods
- Strong analytical skills and experience with distributed computing and with translating mathematical models and algorithms into code (C++, Python, Linux, MapReduce)
- High proficiency in math and algorithms, with personal achievements in a quantitative field or competition (Codeforces, olympiads, academic competitions, etc.)
- Strong interpersonal skills, ability to manage multiple tasks and thrive in a fast-paced team environment
Tech stack
- C++
- Python
- Linux
- MapReduce
- Technical libraries, computing environments and academic research
What we offer
- Good salary and great bonuses which depend on your results
- Impact the business/P&L directly
- Opportunity to collaborate directly with the founders. Work for a fast growing company with low staff turnover
- Become part of a team of the most talented developers, traders, and quants from top-tech universities (Olympiad winners)
Team description
Become part of a team of the most talented developers, traders, and quants from top-tech universities (Olympiad winners)

Team description
We're a diverse team of 350+ people spread across three continents building the leading Chat Marketing platform that is used, and loved, by more than 1.5 million customers worldwide.
What you'll do
- Live Chat: launching as a dedicated area, we are committed to improving our mobile application and Live Chat functionality, enabling real-time, efficient customer support and empowering businesses to deliver exceptional experiences.
- WhatsApp: with a core emphasis on stability, we aim to enhance the functionality of our WhatsApp channel, ensuring seamless messaging experiences, and expanding capabilities for businesses and users alike.
- Instagram: our focus is to enhance the product metrics of our Instagram channel, driving growth and engagement through strategic optimizations and user-centered improvements.
- Flow Builder: our goal is to elevate user experiences by improving the Flow Builder (our visual editor), streamlining the creation of dynamic chatbot flows and enhancing the internal processing to enable smoother, more efficient interactions.
- Developer/Professional Tools: we are actively developing tools and resources for developers who utilize Manychat, fostering a thriving ecosystem. By providing comprehensive support and enhancing our DevProgram, we empower developers to create innovative solutions and integrations with our platform.
What you'll bring
Must haves
- 5+ years experience working in a product team as a PHP developer (we use PHP 8.5).
- Proficiency in English, and ideally experience working with global teams.
- Ability to use relational databases (we use PostgreSQL).
- Experience in writing testable code and test cases.
- Ability and desire to work in a product team.
- Capability to take ownership and lead long-term projects.
- Adaptivity to change and comfort in a fast-paced environment.
- Excellent communication skills, strong ability to collaborate, and proactive approach.
- Problem-solving mindset.
Nice to haves
- Experience working with loaded projects and queue systems.
- Skills working with infrastructure.
- Experience working with third-party APIs.
- Knowledge of different NoSQL solutions and analytical systems.
- Replication, partitioning, sharding, PL/pgSQL, and other hallmarks of deep database work.
Tech stack
- PHP 8.5
- PostgreSQL
- NoSQL solutions and analytical systems
- Replication, partitioning, sharding, PL/pgSQL
What we offer
- Hybrid onboarding to start work remotely and relocation support for you and your family.
- Comprehensive health insurance for both you and your family.
- Professional development budget for conference tickets, online courses, and other relevant resources to help you grow.
- Flexible benefits package to tailor the perks that matter most to you.
- Hybrid work and generous leave options to prioritize your work-life balance.
- In-office perks, including free meals and snacks.
- Company-funded sport activities, annual offsites and team-building events.

Project description
The Carbon Capture Platform is designed to support the verification and management of CO₂ sequestration data from Carbon Capture and Storage operations. The system processes operational monitoring data, generates verifiable records, and enables the creation of digital tokens representing stored CO₂.
At the beginning of the engagement, the team will analyze the existing platform, including reviewing the architecture, evaluating the current codebase, and identifying technical issues, gaps, and potential risks.
Based on this assessment, the team will work on resolving identified problems, improving system stability and scalability, and implementing required enhancements. The team will also continue development of new features and support onboarding of new customers, including integration with external data sources such as SCADA systems.
The overall goal of the engagement is to ensure reliable operation of the platform, support its further development, and enable efficient integration of new users and data sources.
Responsibilities
- Analyze the existing React-based user interface and Node.js backend to identify performance, architectural, or integration issues
- Refactor and improve UI components and backend services to support updated requirements and ensure system scalability
- Implement new end-to-end features and enhancements requested by the customer
- Maintain and extend both client-facing interfaces and server-side logic used by administrators, customers, and auditors
- Design and develop backend REST APIs, integrating them seamlessly with frontend components and blockchain services
- Support end-to-end troubleshooting of system and UI issues reported by users
- Participate in validation of workflows related to customer onboarding, reporting, and token management
Skills
Must have
- 5+ years of full-stack development experience
- Strong experience with Node.js, React, and modern JavaScript/TypeScript ecosystems
- Experience building robust, data-driven web applications from the database up to the UI
- Experience designing, building, and integrating REST APIs
Nice to have
- Experience building dashboards or operational interfaces
- Experience working with blockchain wallets or Web3 libraries
- Familiarity with enterprise UI frameworks
- Experience working with complex data visualization interfaces
- Familiarity with enterprise system architecture and AWS cloud deployment
Other
Languages
- English: B2 Upper Intermediate
Seniority
- Senior
Tech stack
- Node.js
- React
- JavaScript/TypeScript
- REST APIs
- Blockchain services (integration)
- Web3 (nice-to-have)
- AWS cloud deployment (nice-to-have)
Team
Cross Industry Solutions
Location
Gdansk, Poland

WHAT YOU'LL BE DOING
- Implement efficient trading algorithms, balancing solution performance with ease of maintenance
- Communicate closely with the Quantitative Research team regarding technical tasks
- Write a lot of asynchronous, templated, network, and thread-safe code
WHAT WE LOOK FOR IN YOU
- Strong knowledge of data structures, algorithms, and a competitive programming background
- Experience with C or C++
- Understanding of Linux system internals and networking
- Decent level of written and spoken English to work in an international environment
NICE-TO-HAVE
- In-depth knowledge and expertise with low latency/real-time development with sub-microsecond latency
- Lock-free containers and lock-free design patterns
- Knowledge of CUDA
WHY SHOULD YOU JOIN OUR TEAM?
- Great challenges with fast feedback loops
- A welcoming group of highly qualified international professionals
- Cutting-edge hardware and technology
- Work remotely from anywhere in the world
- Access any of our global offices anytime
- Flexible schedule
- 40 paid days off
- Competitive salary
TECH STACK
- C or C++
- Linux system internals and networking
- CUDA
- Low-latency / real-time development
- Lock-free containers
TEAM DESCRIPTION
A welcoming group of highly qualified international professionals.

The Network Infrastructure Engineer will support both project-based work and the ongoing maintenance of customers' existing network environments, including M&A activities. This role requires hands-on experience with Cisco network switches, firewalls (Fortinet, Palo Alto), SD-WAN, and Wi-Fi 802.1x. The Network Engineer will work closely with senior engineers and the Project Management Office (PMO) to support network projects, gather requirements, help scope project engagements, and ensure the security and performance of client networks.
What you'll do
Support the design, deployment, and maintenance of customer networks, ensuring security and performance across projects and ongoing operations.
Job Duties & Responsibilities
- Assist in the design and deployment of network solutions for project-based work, including Cisco switches, Fortinet and Palo Alto firewalls, and SD-WAN.
- Gather requirements and assist in scoping project engagements, collaborating with senior engineers and the PMO.
- Provide support for customers' existing network environments, including troubleshooting and maintenance.
- Participate in M&A activities by assisting in the integration of networks, systems, and applications.
- Implement network security measures, including firewall policies, VPNs, and Wi-Fi 802.1x.
- Collaborate with senior engineers and the PMO to execute network projects and implement network changes.
- Assist in cloud networking tasks in Azure and AWS, including connectivity and security configurations.
- Maintain network documentation, including configurations and topology diagrams.
- Stay updated with industry trends and provide support and recommendations to clients.
Qualifications
- 3-5 years of experience in network support, configuration, and troubleshooting.
- Experience working as a consultant, supporting clients with technical network solutions.
- Hands-on experience with Cisco network switches, Fortinet and Palo Alto firewalls, and SD-WAN technologies.
- Basic understanding of cloud networking in Azure and AWS.
- Experience with M&A network integration support.
- Strong problem-solving skills and ability to manage both project-based work and ongoing support tasks.
- Certifications such as CCNA, CompTIA Network+, or equivalent experience.
- Strong communication and organizational skills.
Tech stack
- Cisco network switches
- Fortinet and Palo Alto firewalls
- SD-WAN
- Azure and AWS cloud networking (connectivity and security)
- VPNs
- Wi-Fi 802.1x
Team description
The Network Infrastructure Engineer will work closely with senior engineers and the Project Management Office (PMO) to support network projects, gather requirements, help scope project engagements, and ensure the security and performance of client networks. This role is part of Conversant Group-Athena7 and may involve collaboration on M&A activity.
Benefits & perks
- Internal and external learning & development opportunities, including career advancement
- Scheduled & flexible PTO programs
- Family-friendly programs and care packages
- Regular team building events
- Competitive compensation & benefits including:
- Private health insurance
- Mental health and wellness programmes
- Company-matched pension scheme
- Life insurance and income protection insurance
- Monthly fitness/gym membership allowance

What You Might Work On
- Designing, building, and improving core product features that help teams understand customers, prioritize work, and communicate roadmaps
- Working in the area that best fits your strengths (frontend, backend, or fullstack) while collaborating closely across disciplines
- Owning problems end to end: from discovery and technical design to implementation, rollout, and iteration
- Partnering daily with product managers, designers, and other engineers to break down complex problems into pragmatic, incremental solutions
Who You Are
Below is an overview of the kind of work we do and the general skills we look for in Product Engineers.
Our Tech Stack
- Frontend: TypeScript, React.js, GraphQL
- Backend: Kotlin/JVM, Ruby, Kafka
- Storage: Postgres, Elastic, Redis
- Data Pipeline: Python, Keboola, Looker, Snowflake
- Infrastructure: AWS, Kubernetes, Terraform
- Business tools: Slack, Jira, Google Suite, Zoom, Notion
Team Description
You'll join Productboard's Product Engineering organization, where we build a platform that helps product teams around the world create products that matter. Our teams work across the stack, from intuitive frontend experiences to scalable backend services and AI-powered workflows, delivering reliable, high-impact features used by thousands of customers.
You'll be part of a cross-functional product team made up of engineers, a product manager, and a designer. We value ownership, autonomy, and thoughtful trade-offs between speed and quality. Depending on your interests and strengths, you may work primarily on frontend, backend, or fullstack problems, with opportunities to influence both product and technical direction.
We work in a hybrid setup from our Prague and Brno offices, and we support relocation for candidates moving to Prague.
Benefits & Perks
- Stock options
- MacBook + 34″ monitor
- Budget for online courses, books, and conferences
- 5 weeks of vacation + 9 sick days
- Volunteer Days for you to help causes close to your heart
- Carrot fertility benefits
- Free snacks, drinks, and yummy catered lunches
- MultiSport card to access sports facilities
- Flexible working hours and home office
- Parental benefits
- Language lessons
- Mental Wellness Program
Relocation Opportunities
If joining us means making a move, we're here to help make that transition easier.
Candidates must have the legal right to work in the EU. While we are unable to provide visa sponsorship for this role, we're happy to support relocation to Prague for candidates already authorized to work in the EU.
Relocation Support
We offer a one-time relocation bonus ranging from $6,000 to $13,000 USD, depending on your personal situation, whether you're moving on your own or with a partner or family.
This bonus is intended to help offset moving expenses and support your transition into your new city. If you're thinking about relocating and want to explore what this could look like for you, we'd be happy to have that conversation.
Team description
Zalando Partner Tech is the technology organisation within Zalando responsible for sourcing fashion products and consumer goods into Zalando. Through our platform and tools, we provide a wide range of products to customers all over Europe. The organisation is composed of more than 300 talented individuals who are dedicated to creating the best platform and inspiring opportunities for multi-million-euro fashion brands and physical retail stores, connecting them with consumers across multiple European markets.
As a Data Engineer within Partner Tech, you will get the opportunity to shape the ways we build, extend, and improve our data platform. You will work in a welcoming and inclusive team of data engineers. You will be responsible for the identification, collection, and transformation of data consumed by Zalando Partners and internal users. You will communicate frequently with your team's stakeholders to understand and react to their needs in order to raise the data capabilities of all of Partner Tech.
We actively leverage Databricks and Airflow, together with Python and Spark on top of AWS infrastructure to build our pipelines and Anomalo for our data quality process.
What you'll do
- Be in the driver's seat. You enjoy being proactive, addressing intricate business problems and taking the initiative to independently explore and understand them.
- Share your ideas about ways to enhance existing tooling and processes.
- Support the implementation of a reliable and performant data infrastructure that integrates with internal and 3rd party tools.
- Collaborate with diverse teams of talented data analysts, applied scientists, business analysts, product managers, and software engineers to implement and maintain high-performance data-processing and data integration systems on top of lake house architecture.
- Work with senior data engineers and peers in central data platform teams to develop the system architecture and ensure seamless integration into Zalando's central data platform.
- Establish and enforce data quality standards throughout the ETL process.
- Set up mechanisms to proactively identify and address potential issues before they impact data integrity.
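The data quality responsibilities above can be illustrated with a minimal sketch. The rule names, thresholds, and `alert` hook below are hypothetical illustrations of the pattern, not Zalando's actual checks (which run through Anomalo):

```python
# Minimal data-quality audit sketch. All rules and thresholds are
# illustrative assumptions, not a real production configuration.

def audit_rows(rows, required_fields, max_null_ratio=0.05):
    """Return a list of human-readable violations for a batch of records."""
    violations = []
    if not rows:
        return ["batch is empty"]
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) is None)
        ratio = nulls / len(rows)
        if ratio > max_null_ratio:
            violations.append(
                f"{field}: null ratio {ratio:.2%} exceeds {max_null_ratio:.0%}"
            )
    return violations

def run_audit(rows, required_fields, alert=print):
    """Run the audit and route any violations to an alerting hook."""
    violations = audit_rows(rows, required_fields)
    for v in violations:
        alert(v)  # in a real pipeline this might page on-call or fail the DAG
    return not violations
```

Checks like these are typically wired into the ETL orchestration (e.g. as an Airflow task) so that bad batches are caught before they reach downstream consumers.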
We'd love to meet you if
- You have solid programming skills in Python.
- Several years of experience applying engineering best practices and designing and implementing scalable data pipelines on top of a data lake.
- Experience running data processing pipelines using distributed data processing frameworks.
- Hands-on experience with a cloud ecosystem (AWS, GCP or Azure) and integration with orchestration services (like Airflow).
- Experience working with an MPP data warehouse, e.g. Redshift, BigQuery, or Snowflake.
- You have excellent verbal and written communication skills in English with an ability to articulate complex technical topics and solutions concisely and simply.
- Nice to have: Experience with Databricks and Apache Spark. Previous operational excellence contributions and support rotation experience (24/7, incident management etc) would be a definite plus.
If you have what it takes, we encourage you to apply even if you don't meet every requirement. You may be the right candidate for this or other roles!
Our offer
- Culture of trust, empowerment and constructive feedback with internal guilds and Employee Resource Groups, knowledge sharing through tech talks and open source commitment, internal tech academy and blogs, product demos, meetups, parties and events.
- Competitive salary and employee shares program.
- 40% Zalando discount on products sold and shipped by Zalando, 30% off Zalando Lounge, and additional discounts from external partners
- Monthly transport, lunch and recreation vouchers, private health insurance and occupational health care services, family services, free beverages and snacks, diverse sports and health offerings
- Centrally located office in Kamppi, Helsinki, with flexible working times, additional holidays and hybrid working model
- Extensive onboarding, mentoring and training opportunities and an international team of experts to work with
- Relocation assistance for internationals
Tech stack
- Databricks
- Airflow
- Python
- Spark
- AWS
- Anomalo for data quality
Team description
BrainRocket is a software development company and digital solutions provider. The company has created over 65 cutting-edge products spanning 20 different markets.
Our team of around 700 tech-savvy professionals successfully delivers scalable projects that are custom-made to the customers' needs.
We also strive to create a culture centred around personal and professional growth for employees, in a positive and welcoming environment.
Responsibilities
- Identify bottlenecks in infrastructure and optimize system performance
- Design fault-tolerant systems and implement disaster recovery strategies.
- Enforce security standards for applications and infrastructure.
- Review substantial changes and design documents, and propose better solutions and best practices for faster, more scalable applications.
- Optimize cloud and on-premise resources for cost-efficiency and scalability.
- Help implement the long-term roadmap of the organisation's IT infrastructure.
- Support automation testing, deployment, and monitoring processes to accelerate software delivery
- Evaluate and recommend architectural improvements, such as transitioning to microservices and micro-frontend architecture
Requirements
- Bachelor's, Master's, or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or any relevant field.
- 10+ years of experience in Software Engineering and/or DevOps
- 5+ years of hands-on experience with cloud technologies (ideally AWS)
- Extensive hands-on experience with K8s and containers, infrastructure and system design, SQL and NoSQL databases, storage, caching, and logging.
- Proven work on scalable, fast, resilient system designs
Tech stack
- Kubernetes and containers
- Cloud technologies (ideally AWS)
- Infrastructure and system design
- SQL and NoSQL databases, storage, caching, logging
Benefits and perks
- Learning and development opportunities and interesting challenging tasks
- Official employment in accordance with the laws of Cyprus and the EU, registration of family members
- Relocation package (tickets, staying in a hotel for 2 weeks)
- Company fitness corner in the office for employees
- Opportunity to develop language skills and partial compensation for the cost of language classes
- Birthday celebration present
- Time for proper rest and 24 working days of Annual Vacation
- Breakfasts and lunches in the office (partially paid by the company)
Job Description
San Francisco-based AI-for-meetings startup is looking for experienced backend engineers. We're developing a next-generation real-time video communications platform, working at the cutting edge of real-time communications and human-centered artificial intelligence. If you're interested in making a large impact and are comfortable working in a fast-paced, self-supervised environment, come join us!
As a senior AI Engineer, you will work in a fast-paced environment that calls for a startup, entrepreneurial mindset: someone who isn't hesitant to constantly shift gears, test, and learn.
Responsibilities
- Research, design, and implement machine learning/deep learning algorithms to support a real-time NLP data analytics platform for summarization, sentiment analysis, and topic discovery.
- Benchmark and fine-tune machine learning/deep learning algorithms.
- Optimize algorithms for real-time performance in the cloud.
- Support algorithm integration into the Headroom product.
- Stay up to date with technology, prototype with and learn new technologies, and be proactive in technology communities.
- Deliver on time with a high bar on quality of research, innovation, and engineering.
- Develop and maintain the NLP pipeline for document data extraction, semantic and sentiment processing, and understanding.
- Create products that provide a great user experience along with high performance, security, quality, and stability.
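As an illustration of what one stage of such an NLP pipeline might look like, here is a deliberately simplified, lexicon-based sentiment scorer. A production system like the one described would use fine-tuned transformer models instead; every name and word list below is a toy assumption, not Headroom's actual code:

```python
# Toy sentiment stage for a meeting-transcript pipeline. A real system
# would call a fine-tuned transformer; this lexicon is illustrative only.

POSITIVE = {"great", "good", "love", "excellent", "agree"}
NEGATIVE = {"bad", "hate", "poor", "terrible", "disagree"}

def sentiment_score(utterance: str) -> float:
    """Score in [-1, 1]: positive minus negative word hits, normalized."""
    words = utterance.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def tag_transcript(segments):
    """Attach a sentiment score to each (speaker, text) transcript segment."""
    return [
        {"speaker": s, "text": t, "sentiment": sentiment_score(t)}
        for s, t in segments
    ]
```

In a real-time platform, a stage like this would run per utterance as transcription completes, feeding downstream summarization and topic-discovery stages.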
Qualifications
- MS or Ph.D. in Computer Science, Electrical Engineering, or a related field with focus on machine learning, computer vision, speech processing, natural language understanding, human-machine interaction, or similar.
- 3+ years of work experience in designing and developing enterprise-scale Large Language Model solutions in one or more of: Named Entity Recognition, Document Classification, Document Summarization, Topic Modeling, Dialog Systems, Sentiment Analysis.
- 3+ years of work experience with transformer based language models
- Knowledge of GPT-X, HuggingFace Transformers
- Experience with fine-tuning and model optimizations (for inference speed / cost)
- Experience with Vision Based Transformers
- Knowledge of MLOps stacks such as GCP, Docker, REST APIs, scaling
- Experience in setting up supervised & unsupervised learning cloud NLP models including data cleaning, data analytics, feature creation, model selection & ensemble methods, performance metrics & visualization, and distillation.
- Fluency in programming languages including but not limited to Python.
- Proficiency in PyTorch and TensorFlow.
- Consistent track record of researching/inventing and shipping advanced machine learning algorithms based on NLP.
- Outstanding communication and interpersonal skills with ability to work well in cross functional teams.
- Published research on signal processing NLP, NLU or multimodal AI is a plus.
- Being a committer or a contributor to an open source project is a plus.
Tech Stack
- Go
- Kubernetes
- Terraform
- Typescript
- React
- GraphQL
- Python
- TensorFlow
- Github
- Google Suite
- Slack
- Adobe CS
- Figma
- Notion
- gRPC
- WebRTC
- Pytorch
- JavaScript
- Elastic Search
- PostgreSQL
- Segment
- Mixpanel
- Linear
Perks & Benefits
- Work from anywhere
- Flexible PTO
- Health, Dental, & Vision
- Competitive salary
- Computer and software setup
- Company outings
The Team
- Andrew Rabinovich โ Co-Founder & CEO
- Jon Pappas โ Head of Product
- Chloe Robertson โ Product Marketing Lead
- Karim Rahemtulla โ Customer Success
- Luis Orlando Carriรณn Ortiz โ Director of Engineering
- Warren Van Winckel โ Director of Engineering
- Josh Runge โ Frontend Software Engineer
- Pedro Garzon โ Senior Machine Learning Engineer
- Ivy Chang โ Lead UX/UI Designer
- Zachary Gilbert โ Senior Software Engineer
- Mahdi Feroze โ Senior Data Engineer

What you'll do
As a Cache Developer, you will design, develop, and test software products and services across Customer and Marketing Service Technology. You will ensure that the solutions delivered are high-quality and cost-effective and meet or exceed support and operational requirements.
Requirements
- InterSystems Cache, Cache Object Script (COS/MUMPS), Cache Server Pages (CSP)
- HTTP / REST API, HTML / CSS, JavaScript / jQuery
- Nice to have: experience in Deltanji/EWD
- Significant experience working with InterSystems Cache and relevant technologies.
- Intersystems Cache SQL, including all aspects of product lifecycle & process development, delivery and support.
- General knowledge in Web Development:
- HTTP / REST API
- HTML / CSS
- JavaScript / jQuery
- Demonstrated experience in agile ways of working and the ability to influence and drive adoption and a continuous improvement culture within a team.
- A results-driven approach including proactive management of problems.
- Proven experience in developing and maintaining strong relationships across business and technology teams.
- Solid understanding of relevant legislation and associated regulations
- Nice to have: experience in cloud solutions (GCP, Azure).
Tech stack
- Intersystems Cache
- Cache Object Script
- Cache Server Pages (CSP)
- COS/MUMPS
- HTTP / REST API
- HTML / CSS
- JavaScript / jQuery
- Cloud: GCP, Azure
Benefits & perks
- Our modern stack projects are the right mix of exciting and challenging.
- Gain access to our diverse range of training programs, courses, and certifications.
- Choose your work style: remote, on-site, or hybrid in one of our stunning offices.
- We offer the freedom of flexible working hours.
- Enhance your language skills with our corporate English classes.
- Work from anywhere and explore the world with our Workation program.

Meet Your Team
Core & Integrations Team (part of the Demand Solutions team). The Demand Solutions department builds the core technologies that help mobile publishers acquire the right users for their apps and games. The team is proud that adjoe's software, developed 100 percent in-house, manages, provides, and analyzes advertisements for more than 200 million daily mobile users.
adjoe's Demand Solutions department builds modern user interfaces and software and powers its dashboard analytics with state-of-the-art databases like Druid, giving advertisers and account managers the insights they need. For this we leverage machine learning models built by our BI analysts and data scientists. Besides delivering high standards, Demand Solutions remains flexible and autonomous from adjoe's other tech teams. To develop new platform features fast, this versatile team works on adjoe's Go backend, which feeds data to the TypeScript React frontend.
As part of the Demand Solutions team, the Core & Integrations team is responsible for the core logic of our advertising platform: campaign distribution to find the best combination of advertising campaigns for end users, a campaign management API to optimize existing advertising campaigns, and some other smaller API products. The Core & Integrations team also handles integrations with our external partners, such as our attribution partners (MMPs like AppsFlyer and Adjust) and aggregators working with the advertisers using our platform.
What You Will Do
- Contribute to the development of our backend written in Go and maintain our microservice architecture used to communicate with our frontend (based on TypeScript React). To do this, you'll use event buses like Kafka and SQS/SNS for reliable asynchronous microservice communication.
- Work in a community of developers with whom you'll share knowledge and contribute to peer code reviews.
- Work with modern databases such as Druid, as well as MySQL and Redis, where you'll optimize queries and the way we query data to deliver few-millisecond response times.
- Support partners by providing them with raw or aggregated data based on their business needs; we believe in data transparency and well-documented open APIs.
- Collaborate with our Data Science team to solve complex math problems used to optimize our ML algorithms, which are dedicated to delivering the right ads to the right users, and integrate their solutions into adjoe's application.
- Be responsible for collecting the billions of daily API events and aggregating them in our Kafka and Kinesis streams with the goal of querying them from the data lake in a matter of seconds.
- Be part of an international English-speaking team dedicated to scaling our adtech platform beyond our hundreds of millions of monthly active users.
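The event-aggregation responsibility above boils down to micro-batching: buffer incoming events and flush them to a stream once a size or age threshold is hit, trading a little latency for far fewer writes. adjoe's backend is Go; the sketch below shows only the pattern, in Python, with a hypothetical injected `flush` sink standing in for a Kafka/Kinesis producer:

```python
import time

class MicroBatcher:
    """Buffer events; flush when a count or age threshold is reached.

    The flush sink is injected so it can be anything: in production it
    might write a batch to Kafka or Kinesis, in tests a plain list.
    """

    def __init__(self, flush, max_events=500, max_age_s=1.0, clock=time.monotonic):
        self.flush = flush
        self.max_events = max_events
        self.max_age_s = max_age_s
        self.clock = clock
        self.buffer = []
        self.opened_at = None  # timestamp of the first event in the buffer

    def add(self, event):
        if not self.buffer:
            self.opened_at = self.clock()
        self.buffer.append(event)
        if (len(self.buffer) >= self.max_events
                or self.clock() - self.opened_at >= self.max_age_s):
            self.flush(self.buffer)
            self.buffer = []
```

At billions of daily events, the thresholds are what you tune: larger batches amortize per-request overhead on the stream, smaller ones keep data fresher in the data lake.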
Who You Are
- You have worked in software development for 5+ years
- You have gained deep experience building web applications in Go over at least 3 years.
- You also know how to work effectively with key-value databases (Redis, DynamoDB) and how to optimize their use for high-volume traffic.
- You know how to profile a Go application to find bottlenecks, and you have already used this skill in your work to optimize application code.
- You have experience working with infrastructure as code (Terraform), Docker, and serverless infrastructure
- You have worked on a large Go application with a considerable amount of traffic
- You are open to relocating to Hamburg, Germany
- You communicate confidently and fluently in English (spoken and written), as it is the working language of our international team
Tech Stack
- Go (backend)
- TypeScript React frontend
- Kafka and SQS/SNS for asynchronous messaging
- Druid, MySQL, Redis
- Terraform, Docker, and serverless infrastructure
- Data pipelines and data lake querying
Benefits & Perks
- Invest in Your Future: Regular feedback and our development program support your growth, helping you expand your skill set and achieve your career goals.
- Easy Arrival to adjoe: Visa support, relocation assistance, German language help, and a relocation bonus to make Hamburg feel like home.
- Live Your Best Life, at Work and Beyond: Hybrid setup with core office days Monday, Tuesday, and Thursday; flexible working hours; 30 vacation days; 3 weeks remote per year; free in-house gym access; mental health support via EAP.
- Thrive Where You Work: Alster lake view from the central office with top-notch equipment, open spaces, and a variety of snacks and drinks.
- Join the Community: Regular team and company events, hackathons, and social gatherings.
Your Work-Life Upgrade
- competitive salary
- quarterly team adventures
- monthly feasts on the company
- up to €7,000 for referred friends
- discounted city commute
- extensive visa & relocation support
- convenient hybrid work model
- continuous investments in your development
- generous vacation days
- mental health & wellness benefits

What you'll do
- Architect, design, lead, and build an end-to-end performant, reliable, scalable data platform.
- Be an independent individual contributor who can solve problems and deliver high-quality solutions with minimal oversight and a high level of ownership.
- Mentor, guide, and work with junior engineers to deliver complex, next-generation features.
- Bring a customer-centric, product-oriented mindset to the table - collaborate with customers and internal stakeholders to resolve product ambiguities and ship impactful features
- Partner with engineering, product, design, and other stakeholders in designing and architecting new features
- Experimentation mindset - autonomy and empowerment to validate a customer need, get team buy-in, and ship a rapid MVP
- Quality mindset - you insist on quality as a critical pillar of your software deliverables
- Analytical mindset - instrument and deploy new product experiments with a data-driven approach
- Monitor, investigate, triage, and resolve production issues as they arise for services owned by the team
- Create and maintain data pipelines and foundational datasets to support product/business needs.
- Design and build database architectures with massive and complex data, balancing with computational load and cost
- Develop audits for data quality at scale, implementing alerting as necessary
- Create scalable dashboards and reports to support business objectives and enable data-driven decision-making
- Troubleshoot and resolve complex issues in production environments
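One small, concrete instance of the load/cost balancing mentioned above is choosing a partition count from data volume: too few partitions underuse the cluster, too many add scheduling overhead. The target size and bounds below are illustrative assumptions, not Checkr's actual settings:

```python
import math

def partition_count(total_bytes, target_mb=128, min_parts=1, max_parts=10_000):
    """Pick a partition count so each partition lands near target_mb.

    128 MB is a common rule-of-thumb target for Spark/Parquet workloads;
    the min/max bounds keep degenerate inputs from producing silly plans.
    """
    target = target_mb * 1024 * 1024
    parts = math.ceil(total_bytes / target) if total_bytes else min_parts
    return max(min_parts, min(parts, max_parts))
```

In a PySpark job this number would feed something like `df.repartition(n)` before a heavy shuffle or write, keeping per-task work roughly uniform.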
What you bring
- 10+ years of experience designing, implementing, and delivering highly scalable and performant data platforms.
- Experience building large-scale (100s of Terabytes and Petabytes) data processing pipelines - batch and stream
- Experience with ETL/ELT, stream and batch processing of data at scale
- Expert level proficiency in PySpark, Python, and SQL
- Expertise in data modeling, relational databases, NoSQL (such as MongoDB) data stores
- Experience with big data technologies such as Kafka, Spark, Iceberg, Datalake, and AWS stack (EKS, EMR, Serverless, Glue, Athena, S3, etc.)
- An understanding of Graph and Vector data stores (preferred)
- Knowledge of security best practices and data privacy concerns
- Strong problem-solving skills and attention to detail
- Experience/knowledge of data processing platforms such as Databricks or Snowflake.
Tech stack
- PySpark, Python, SQL
- Big data technologies: Kafka, Spark, Iceberg, Datalake
- AWS stack: EKS, EMR, Serverless, Glue, Athena, S3
- Databricks or Snowflake
- Graph and Vector data stores (preferred)
- Security and data privacy best practices
Team description
Join our Data Platform team, which is responsible for developing and maintaining scalable data platforms that power Checkr's fair and safe hiring decisions. The centralized data platform is the heart of all key customer-facing products, and you'll work on high-impact projects contributing to our next-generation products.
What you get
- A fast-paced and collaborative environment
- Learning and development allowance
- Competitive cash and equity compensation and opportunity for advancement
- 100% medical, dental, and vision coverage
- Up to $25K reimbursement for fertility, adoption, and parental planning services
- Flexible PTO policy
- Monthly wellness stipend
The role
You'll join a lean, horizontal team building our internal revenue optimization platform. Your mission: build and own the web application that serves as mission control for our monetization efforts. From complex experiment configurations to high-performance analytics dashboards, you will create the tools that allow our team to move faster and scale winning strategies across our app portfolio.
You'll be making core architectural decisions, shaping API contracts with the backend team, and ensuring our internal tooling is as polished and intuitive as a consumer product.
What you'll do
- Build and own the web app for our monetization and experiments platform.
- Refine and implement complex experiment configuration interfaces (parameter configs, audience segmentation, etc.).
- Build analytics dashboards and data visualizations that turn raw numbers into actionable insights.
- Build platform admin tooling to streamline operations.
- Work closely with Monetization and Business teams to translate requirements and prototypes into intuitive, high-performance UI.
- Shape API contracts together with backend engineers to ensure seamless data flow.
- Own frontend architecture and component design, ensuring a scalable and maintainable codebase.
What we're looking for
- 5+ years of commercial frontend development experience.
- Expertise in React and modern JavaScript/TypeScript.
- Strong proficiency in TypeScript, CSS, and HTML - you write clean, type-safe, and performant code.
- Proven track record of building complex, data-rich web applications.
- Solid understanding of state management for non-trivial business logic (Redux, Zustand, React Query, etc.).
- Eye for UI/UX - you care about how things look and feel, not just that they work.
- Experience with charting/data visualization libraries (Recharts, D3, Chart.js, etc.).
- Experience with modern frontend tooling (Vite, Webpack) and CI/CD pipelines.
- High ownership, strong judgment, and the ability to execute independently.
Bonus points
- Experience with experimentation platforms, A/B testing, feature flags, or mobile app monetization.
- Experience building or contributing to design systems.
- Experience building production-grade, custom charting libraries.
You'll fit here if…
- You're not waiting to be told what to do - you see the usability gap and solve it.
- You care about building clean components, but you care even more about the user's ability to get the job done.
- You've shipped real production systems (and dealt with the edge cases of complex data sets).
- You prefer fast decisions, high trust, and a team that expects you to think like an owner.
- You want to build the "brain" of a system that directly supports revenue growth.
Team
You'll join a lean, horizontal team building our internal revenue optimization platform.
Tech stack
- React and TypeScript
- CSS/HTML with a focus on clean, accessible UI
- State management: Redux, Zustand, React Query, etc.
- Charting/data visualization libraries: Recharts, D3, Chart.js
- Frontend tooling: Vite, Webpack
- CI/CD pipelines
Location / Benefits
Remote-friendly · Fully Remote