Senior Automation QA Engineer (Python) - Job Description
About This Role
This is a worldwide, fully remote Senior Automation QA Engineer (Python) role, responsible for designing, implementing, and maintaining automated tests to ensure the continuous quality of complex products. You will write Python-based test scripts, expand automated coverage across APIs, back-end services, and data flows, and collaborate with business analysts, product owners, backend/frontend engineers, and architects. The role emphasizes strong QA fundamentals (Test Pyramid, BDD/ATDD), REST API and database validation, performance/load testing, CI/CD integration, and working within Agile/Scrum teams.
Responsibilities
- Write and maintain automation scripts in Python to increase automated test coverage across the product.
- Collaborate with business analysts, product owners, backend and frontend engineers, and architects to clarify requirements and ensure product quality.
- Plan, create, and manage Test Plans, Test Cases, and Regression Sets.
- Use a variety of automation tools and frameworks to plan, execute, and report on tests.
- Perform REST API testing using REST libraries, Postman, and curl.
- Validate data and performance of RDBMS queries using SQL and related tools.
- Apply BDD and ATDD practices to define and automate acceptance criteria.
- Conduct performance and load testing (e.g., JMeter) and analyze the results.
- Integrate automated tests into CI/CD pipelines and support continuous testing.
- Use JIRA, Confluence, and Test Management Systems to track defects, coverage, and test execution.
- Participate in Agile/Scrum ceremonies and contribute to continuous improvement of QA processes.
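The automated API checks described above often boil down to asserting that a response payload has the expected shape. A minimal, self-contained sketch in Python (the endpoint name, fields, and status values here are hypothetical, not from any specific product):

```python
# Sketch of a payload validator for a hypothetical /orders API response.
# Field names, types, and status values are illustrative assumptions.

def validate_order_payload(payload: dict) -> list[str]:
    """Return a list of validation errors for a hypothetical order payload."""
    errors = []
    for field, expected_type in (("id", int), ("status", str), ("items", list)):
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    if payload.get("status") not in (None, "open", "shipped", "cancelled"):
        errors.append("unexpected status value")
    return errors


# Pytest-style tests: a clean payload passes, a broken one is reported.
def test_valid_payload():
    assert validate_order_payload({"id": 1, "status": "open", "items": []}) == []


def test_missing_field():
    assert "missing field: items" in validate_order_payload({"id": 1, "status": "open"})
```

In a real suite the payload would come from an HTTP call (e.g. via a REST client library), and the validator would run as one assertion inside a pytest test module wired into CI.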
Requirements
- 4+ years of hands-on AQA (Automation QA) experience.
- Strong teamwork skills and ability to interact productively with end users, analysts, and customers in a diverse team environment.
- Solid understanding of the Test Pyramid and test types (unit, component, integration, functional, regression, etc.).
- Experience creating and managing Test Plans, Test Cases, and Regression Sets.
- Good knowledge of and experience with Python (primary programming language) and related libraries.
- Hands-on REST API testing experience with REST libraries, Postman, and curl.
- Good SQL skills and experience testing RDBMS queries and their performance.
- Practical experience with BDD and ATDD.
- Performance and load testing experience (e.g., JMeter).
- Good understanding of CI/CD approaches and related tools.
- Experience with project management and documentation tools such as JIRA and Confluence.
- Experience with Test Management Systems (TMS).
- Experience working in Agile/Scrum teams.
- English at B2 level or above.
Nice to Have
- Experience with other programming languages (JavaScript, Java) and additional AQA tools.
- Experience with RDBMS migration tools such as Liquibase, Flyway, or Pyway.
- Automated security testing experience.
Benefits
- Competitive salary
- Remote work opportunity
- Comfortable work in your local time zone
- Flexible work schedule
- Professional growth and development
- Multicultural working environment

Staff Forward Deployed Engineer at GitLab
This role is focused on strategic accounts, where you will help customers adopt GitLab and the GitLab Duo Agent Platform in complex enterprise environments, including self-managed, regulated, and constrained deployments. You will guide deep technical discovery, design practical adoption paths, and build reusable solutions that help customers move from early platform use into broader CI/CD, security, compliance, and AI-enabled workflows. This is not a traditional consulting role centered on one-off delivery. Instead, you will use customer issues to create durable technical assets, shape architecture patterns, and influence upstream product and engineering decisions when field needs point to a broader solution. Your work will help reduce time to value for strategic customers while improving how GitLab scales adoption across similar environments.
What you'll do
- Conduct deep technical discovery in selected strategic accounts to assess platform readiness, evaluate constraints, and identify high-value adoption opportunities across GitLab and GitLab Duo Agent Platform.
- Lead architecture and delivery design for complex enterprise environments where platform migration, regulated requirements, and product boundaries intersect.
- Partner with customer stakeholders and GitLab account teams to prioritize use cases based on business impact, technical feasibility, repeatability, and long-term platform value.
- Design and build bounded proofs, prototypes, deployment patterns, and reusable accelerators across source code management, CI/CD, security, compliance, and AI-enabled workflows.
- Architect self-managed and enterprise deployments, including runners, access controls, network boundaries, observability, AI Gateway, model connectivity, and governance controls.
- Turn recurring field patterns into reusable assets such as runbooks, templates, design notes, technical guidance, product briefs, and reference architectures that can be used across accounts.
- Contribute code, technical designs, or architecture changes when strategically necessary, in partnership with product and engineering, to address blockers that should be solved upstream.
- Travel as needed for strategic customer engagements, architecture workshops, and team coordination, with expected travel up to 50%.
What you'll bring
- Experience in software engineering, platform architecture, forward deployed engineering, technical consulting, or similar customer-facing engineering roles.
- Strong software engineering fundamentals, including the ability to read, reason about, and contribute to production systems, ideally with experience in Ruby on Rails and/or Go.
- Strong systems design and software architecture skills, with experience evaluating APIs, asynchronous workflows, CI/CD systems, security boundaries, scalability, and operational tradeoffs.
- Hands-on experience with GitLab CI/CD, pipeline design, YAML, runners, and GitLab APIs.
- Experience with infrastructure as code and enterprise deployment tooling such as Terraform, Ansible, Helm, or similar approaches.
- Working knowledge of large language models, agentic patterns, tool orchestration, and the practical limits of AI systems in production environments.
- A track record of creating reusable technical assets that outlive a single engagement, along with strong written and verbal communication skills for technical design and architecture guidance.
- Comfort leading conversations with senior stakeholders across security, compliance, engineering, platform, and business teams, especially in ambiguous enterprise environments.
Team description
The Staff Forward Deployed Engineer works closely with teams across Customer Success, Solutions Architecture, Product, and Engineering to solve hard adoption problems in enterprise customer environments. We focus on high-leverage technical work for strategic customers, especially where self-managed deployment models, security and compliance requirements, migration complexity, or AI platform adoption create barriers that standard approaches do not fully address. Success on our team comes from strong technical judgment, a bias for reuse, and the ability to balance immediate customer needs with long-term platform impact.
Tech stack & capabilities (highlights)
- Ruby on Rails and/or Go
- GitLab CI/CD, pipeline design, YAML, runners, GitLab APIs
- Infrastructure as code: Terraform, Ansible, Helm
- AI/LLM concepts, agentic patterns, tool orchestration
Benefits and other details
- Benefits to support your health, finances, and well-being
- Flexible Paid Time Off
- Team Member Resource Groups
- Equity Compensation & Employee Stock Purchase Plan
- Growth and Development Fund
- Parental leave
- Home office support

About the Role
The Site Reliability Engineering team at CaptivateIQ operates across the engineering organization, supporting our development teams by providing the tools and processes they need to do their jobs well. We ensure that the service our product provides is great for paying customers, and when it isn't, we make sure the business is well informed. We do this by providing infrastructure, platform, reliability, and observability support to our internal customers to help them achieve their goals. The team is made up of thoughtful and pragmatic engineers who balance doing things right with doing things right now. We invest in iterative efforts to refine or pivot our work, deliver real-world results, and reflect on the process in order to improve it incrementally. We are fully remote and invest in written communication for long-term institutional memory, while valuing the synchronous time we have together to build and strengthen our relationships.
What you'll do
- Learn by reading and writing designs, documentation, runbooks, and industry literature
- Partner with development teams to design and implement reliable and resilient services
- Build infrastructure automation that's easy for other teams to use
- Develop observability processes, reports, and tooling to diagnose performance and stability issues
- Eliminate toil by automating manual processes
- Ensure we exceed our compliance and security commitments
- Act in an ethical and professional manner
Requirements
- 5+ years of experience in Software Engineering, SRE, or DevOps roles
- Strong written and verbal communication skills (We use Slack, Notion, and GitHub)
- Experience with Infrastructure as Code (We use Terraform and AWS)
- Experience with containers and container orchestration tools (We use ECS)
- Experience with authoring and maintaining code (We use Bash, Python, and Golang)
- Experience with using and helping others with observability tools and techniques (We use Datadog)
- Love for the Oxford comma (We use, love, and respect it)
Nice to Haves
- Experience with cloud cost management and FinOps
- Experience in building, maintaining, and operating SaaS or Web based applications
- Experience with distributed system principles and their application
- Experience building and operating multi-region or cell based applications
- Experience with managing cloud vendor relationships
- Experience with compliance and regulated environments (We use SOC2 and HIPAA)
Benefits
- (US-ONLY) 100% of medical, dental, and vision covered including 75% for dependents
- Vacation days and quarterly mental health days so you can recharge
- (US-ONLY) 401(k) plan to participate in and save towards the future
- Apple products to help you do your best work
- Resource Groups (ERGs) to support and celebrate the shared identities and life experiences of communities within CaptivateIQ
- ERGs directly support our company-wide DEI goals as a space for developing and retaining diverse talent
Team description
We are a fully remote team that values written communication for long-term institutional memory and synchronous time to build strong relationships. The SRE team collaborates with development teams to design reliable services, automate infrastructure, and maintain observability and security standards across the organization.

Job responsibilities
- Manage end-to-end ML projects: problem definition → solution → testing → deployment → support.
- Work with data engineers to build datasets and define data requirements, and assess feasibility, risks, and constraints.
- Work with data analysts to design and analyze A/B tests: metrics, splits, interpretation of results, and recommendations for deploying solutions.
- Develop and train models (classic ML + DL), including solutions for text and image embeddings; conduct offline evaluation and error analysis.
- Deploy the model and code to production (Python service), support releases and integrations.
- Be responsible for model quality post-launch: metrics, monitoring, drift/degradation, improvement plans, and support procedures.
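The post-launch drift monitoring mentioned above can start as simply as comparing a feature's distribution in a reference window against a live window. A minimal sketch using the Population Stability Index (PSI), one common drift metric; the bucket edges and the ~0.2 alert threshold are illustrative conventions, not anything prescribed by this role:

```python
import math


def psi(expected: list[float], actual: list[float], edges: list[float]) -> float:
    """Population Stability Index between two samples, given shared bucket edges."""
    def fractions(sample):
        counts = [0] * (len(edges) - 1)
        for x in sample:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(sample), 1)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


# Identical distributions give PSI near 0; a common rule of thumb
# flags drift somewhere above ~0.2 (threshold is illustrative).
ref = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
assert psi(ref, ref, edges=[0.0, 0.25, 0.5, 0.75, 1.0]) < 1e-9
```

In production this kind of check typically runs on a schedule (e.g. via Airflow) per feature, with alerts wired to the monitoring stack.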
Key qualifications
- 4+ years of experience as a Data Scientist (with specific experience in search / ranking / recommendations tasks).
- Experience managing end-to-end ML projects in production (from setup to support).
- Excellent understanding of classical ML: feature engineering, boosting, classification/regression, cross-validation, threshold selection, calibration.
- Experience with DL (PyTorch/TensorFlow): understanding of fine-tuning principles and model inference.
- Python (production-grade): readable code, tests for critical components, understanding of model/artifact packaging and service integration.
- Understanding of ML monitoring: quality metrics, drift, alerts, diagnostics, and support procedures.
- SQL proficiency sufficient for independent dataset building (joins, window functions).
- Experience with model interpretability and error analysis.
- MLflow / W&B / DVC or similar experiment tracking tools.
- Orchestration/pipelines (Airflow/Prefect/Dagster) and advanced data processing.
- English at the B2 (upper-intermediate) level.
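The SQL bar above (joins, window functions for independent dataset building) is easy to self-check with Python's built-in sqlite3 module. A small sketch; the table and data are made up, and window functions require SQLite 3.25+ (bundled with modern Python builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id INTEGER, ts INTEGER, amount REAL);
    INSERT INTO events VALUES (1, 10, 5.0), (1, 20, 7.0), (2, 15, 3.0);
""")

# Rank each user's events by time and keep only the latest per user --
# a common window-function pattern when building "last state" datasets.
rows = conn.execute("""
    SELECT user_id, amount FROM (
        SELECT user_id, amount,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()

print(rows)  # [(1, 7.0), (2, 3.0)]
```

The same PARTITION BY / ORDER BY pattern carries over directly to warehouse engines like BigQuery or Snowflake.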
We offer you
- Flexible schedules and opportunity to work remotely;
- Ambitious and supportive team who love what they do, appreciate each other, and grow together;
- Internal programs for adaptation and training, development of soft skills, and leadership abilities;
- Partial compensation for participating in external training and conferences;
- Corporate English school: Group and individual lessons, speaking clubs with colleagues from all over the world;
- Corporate prices on hotels and travel services;
- MyTime Day Off - an extra non-working day without loss of compensation.

Senior HTML/Markup Developer [Armenia]
Yerevan, Armenia
Responsibilities
- Develop efficient, fast, and adaptive layouts of interfaces;
- Collaborate with developers and web designers to improve the product;
- Introduce ideas, solutions, and optimize existing applications.
Requirements
- 5+ years of professional experience as an HTML coder;
- Excellent knowledge of HTML and CSS;
- Good understanding of UI/UX design and cross-browser layout;
- Strong knowledge of CSS/JS animation;
- Experience working with CSS Pre-Processors: Sass/Less;
- Understanding of web application performance optimization;
- Experience with Figma / Sketch / Photoshop;
- Advanced level of English.
Nice to Have
- Understanding of front-end build tools (Gulp, Webpack, etc.);
- Knowledge of HTML email templates;
- Experience working with CSS frameworks;
- Experience working with template engines (Pug, Handlebars, etc.);
- Experience with Canvas API / WebGL / ThreeJS / GSAP.
We offer excellent benefits, including but not limited to
- Learning and development opportunities and interesting, challenging tasks;
- Official employment in accordance with Armenian labor laws, with the possibility of registering family members;
- Relocation package (flight tickets + 2-week hotel stay);
- Language development support and partial compensation for classes;
- Birthday celebration gift;
- 20 working days of Annual Vacation for proper rest.

Description
About The Team
Launched in 2019, Constructor is an AI-first ecommerce search and discovery platform that helps shoppers find the right products at the right time and enables leading global e-commerce brands to drive meaningful revenue and conversion gains.
As a Backend Engineer in the Attribute Enrichment team, you will improve the e-commerce experience for hundreds of millions of users across the world by designing, building, and maintaining scalable services that deliver enriched items, metadata, and attributes to end users. You'll work across key services like Attribute Enrichment and Badges, managing a dedicated database and developing APIs that integrate with Search and Browse.
You will collaborate closely with ML engineers to develop and optimize the Attribute Enrichment service, ensuring its scalability, reliability, and performance. You will build the CI/CD and observability systems from scratch, as well as maintain and improve existing mature systems.
What you'll do
- Build a new service to deliver ML-generated enriched attributes to our customers
- Design a high-throughput, low-latency Badges service for heavy traffic
- Develop Constructor's Attribute Enrichment product and Badges product features
- Deploy highly available services in the cloud and implement CI/CD pipelines following industry best practices (AWS, Jenkins, GitHub Actions)
- Set up service observability, monitoring metrics, and alerting (Prometheus, Grafana, PagerDuty, AWS CloudWatch)
- Work with a dedicated database to manage enriched items, their metadata, and derived attributes for our customer dashboard application, ensuring data consistency, performance, and availability for downstream services and APIs
- Write and maintain unit, integration, and end-to-end tests for backend services to ensure code quality and service reliability
Requirements
- 5+ years of experience
- Strong computer science background & familiarity with networking principles
- Proficiency in Python and backend development patterns
- Experience in designing, developing, and maintaining high-load real-time services and public APIs
- Experience with NoSQL and relational databases, distributed systems, and caching solutions would be a plus
- Experience with any compiled programming language (e.g. Go, Rust) would be a plus
- Experience writing unit and integration tests for backend services using frameworks such as Pytest, unittest, or equivalent
- Experience collaborating in cross-functional teams
- Excellent English communication skills
Benefits
- Unlimited vacation time - we strongly encourage all employees to take at least 3 weeks per year
- Fully remote team - choose where you live
- Work from home stipend - we want you to have the resources you need to set up your home office
- Apple laptops provided for new employees
- Training and development budget - refreshed each year for every employee
- Maternity & Paternity leave for qualified employees
- Work with smart people who will help you grow and make a meaningful impact
- Base salary: $80k–$120k USD, depending on knowledge, skills, experience, and interview results
- Stock options - offered in addition to the base salary
- Regular team offsites to connect and collaborate
Diversity, Equity, and Inclusion at Constructor
At Constructor.io we are committed to cultivating a work environment that is diverse, equitable, and inclusive. As an equal opportunity employer, we welcome individuals of all backgrounds and provide equal opportunities to all applicants regardless of their education, diversity of opinion, race, color, religion, gender, gender expression, sexual orientation, national origin, genetics, disability, age, veteran status or affiliation in any other protected group.
Studies have shown that women and people of color may be less likely to apply for jobs unless they meet every one of the qualifications listed. Our primary interest is in finding the best candidate for the job. We encourage you to apply even if you don't meet all of our listed qualifications.
Job responsibilities
- Analyze the current SAP GTS landscape and define the strategy for migration and full-cycle implementation (Blueprint to Hypercare).
- Configure and optimize key SAP GTS modules: Compliance Management, Customs Management, and Risk Management.
- Lead fit-gap analysis, solution design, and process improvements aligned with legal and trade compliance requirements.
- Ensure seamless integration between SAP GTS, SAP S/4HANA, and external systems.
- Develop and execute a robust data migration plan, ensuring accuracy and completeness.
- Coordinate with cross-functional teams (IT, Trade Compliance, Logistics, Finance) to align business and system processes.
- Support testing, defect resolution, user training, and documentation, while providing expert guidance on SAP best practices and future roadmap.
Requirements
- 3+ years of functional consulting experience with SAP GTS, including migration or greenfield implementation.
- Strong understanding of international trade processes and regulatory compliance.
- Experience integrating GTS with S/4HANA systems.
- Excellent communication and documentation skills.
- Experience with SAP GTS Edition for SAP HANA.
- Familiarity with customs integration tools and third-party systems.
- English B2+
Nice-to-have skills
- Knowledge of German.
- Experience with both legacy and new editions of SAP GTS.
Benefits
- 89% of projects use the newest SAP technologies and frameworks.
- Expert communities and internal courses.
- Valuable perks to support your growth and well-being.
- Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one.
- Healthy work atmosphere: On average, our employees stay in the company for 4+ years.
Team description
At LeverX, we have had the privilege of working on over 950 SAP projects, including some with Fortune 500 companies. With 20+ years in the market, our team of 2,200 is strong, reliable, and always evolving: learning, growing, and striving for excellence.
Tech stack
SAP GTS, SAP S/4HANA, SAP GTS Edition for SAP HANA.
Location
Worldwide

Data Scientist
Responsibilities
- Solve real world problems using Data Science and statistical techniques
- Implement functionality which can be served in production for internal customers as well as external customers
- Designing, building and maintaining data sets
- Data cleaning & modeling
- Feature engineering
- Feature extraction
- Building end-to-end data & machine learning pipelines
- Conduct reproducible research
- Collaborate with Engineering, Operations, Product Management and other functions in the company to deliver algorithmic solutions
- Apply software engineering practices in our code that implements our research and its infrastructure
- Produce high quality, clean, maintainable reproducible research and code
Requirements
- High proficiency in Python and its data science stack
- Background and experience in data/backend engineering, ideally in production environments (3+ years; mid/senior-level role)
- Background and hands-on experience (2+ years) in implementing research and algorithms in Python, specifically in information retrieval, text processing, NLP, and machine learning
- Experience with developing AI solutions across a variety of domains
- Track record of good written and verbal communication of complex things in a simple way as well as ability to collaborate well with people from different backgrounds and professions
- A Bachelorโs degree or higher
- Must have hands-on experience working with SQL
- Must have hands-on experience working with Python (preferably with Pandas)
- Must be strong at applying statistical methods to data
- Must be strong with data pipelining
- Must be an independent thinker with the ability to work independently to solve problems
Total Rewards
Our workforce deserves fair and competitive pay that meets them where they are. With scalable benefits, rewards, and perks, our total rewards programs reflect our commitment to inclusivity and access for all.
Some things you'll enjoy
- Stock grant opportunities dependent on your role, employment status and location
- Additional perks and benefits based on your employment status and country
- The flexibility of remote work, including optional WeWork access
Who you are
At Deel, we're an equal-opportunity employer that values diversity and positively encourages applications from suitably qualified and eligible candidates regardless of race, religion, sex, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, pregnancy or maternity, or other applicable legally protected characteristics.

About Wheely
Wheely is redefining premium transportation across major cities in Europe, the US, and the Middle East. We blend cutting-edge technology with the craft of five-star chauffeuring to deliver an experience trusted by more than 100,000 active riders and 1,200 corporate accounts. We're a profitable, fast-growing scale-up with $43M raised and over $100M in annual revenue. Having recently launched in New York City, we're expanding rapidly across the US and EMEA. If you take pride in your craft and want to help shape the next chapter of our growth, we'd love to hear from you.
About the role
Our Marketplace team builds the models and algorithms that balance supply and demand, optimising pricing and matching to ensure chauffeurs earn and passengers aren't left waiting. We're looking for a Mid/Senior Backend Engineer to join a team that keeps frameworks lean and focuses on what matters: clean, maintainable code, shipped fast with TDD, DDD, and continuous integration and delivery. We are a Go shop, and while we're busy migrating away from our Ruby monolith, our stack includes PostgreSQL, MongoDB, RabbitMQ, Redis, gRPC, and Thrift. Everything runs on AWS and Kubernetes, managed via Terraform. Our interview process includes a recruiter screen, an algorithms round, live coding, and a system design interview. Senior+ candidates also complete a structured review of past experience and achievements.
Responsibilities
- Write high-quality, performant code primarily in Go.
- Implement new microservices while helping us responsibly manage and migrate away from legacy services.
- Work closely with product managers, designers, and data scientists to turn abstract requirements into concrete technical designs.
- Ensure our systems stay responsive under heavy load, optimising for both latency and reliability.
Requirements
- 3+ years of experience (5+ years for seniors) building and maintaining scalable backend services. We use Go. If you know it, great. If not, we'll interview you in your strongest language (Python, C++, Java, Ruby, etc.). We hire for engineering fundamentals, not syntax.
- In-depth knowledge of relational and NoSQL databases (PostgreSQL, MongoDB, Redis) and experience with message brokers like RabbitMQ or Kafka.
- Upper-Intermediate (B2) English proficiency or higher. You should be comfortable debating technical trade-offs with your peers.
What we offer
- Office-based role in Nicosia, four days a week with flexible start and finish times, plus one remote day of your choice
- Competitive salary
- Employee stock options plan
- Private medical and dental insurance
- Daily lunch allowance
- Latest-generation MacBook Pro and 4k display
- Professional development stipend
- Relocation support, including visa sponsorship and allowance
Tech stack
Go, PostgreSQL, MongoDB, Redis, RabbitMQ, gRPC, Thrift; AWS; Kubernetes; Terraform; with some Ruby monolith migration ongoing.
What you'll do
- AI Governance & Enablement: Develop and maintain a practical framework for evaluating, approving, and securely deploying AI tools across the organization. Assess data exposure risks, establish acceptable use guidelines, and help teams adopt AI confidently, not fearfully.
- Vulnerability Management: Own our vulnerability management program: scanning, triaging, coordinating remediation, and tracking resolution across infrastructure, applications, and endpoints.
- Compliance: Support and improve our compliance posture (SOC 2, ISO 27001), including evidence collection, control monitoring, and audit support. Ensure AI usage aligns with our regulatory and contractual obligations.
- Incident Response: Lead security incident response: investigate alerts, coordinate containment, document root causes, and drive improvements.
- Security Tooling: Manage and tune security tooling (EDR, SIEM/logging, DLP, email security, identity and access management controls).
- Vendor & Third-Party Risk: Conduct security reviews of third-party vendors, SaaS integrations, and AI services, evaluating data handling, model training policies, and privacy commitments.
- Policy & Standards: Develop and maintain security policies, standards, and runbooks that are practical and right-sized for our environment, including clear, usable AI usage policies that people actually follow.
- Application Security Partnership: Partner with Platform Security and Engineering on application security topics, advising on secure architecture, reviewing configurations, and supporting penetration testing efforts.
- Security Awareness: Drive security awareness initiatives: phishing simulations, training programs, AI literacy education, and ongoing guidance for the team.
- Threat Intelligence: Monitor and assess emerging threats (including AI-driven attack vectors), and translate them into actionable recommendations for leadership.
Who you are
- 4+ years of experience in information security, cybersecurity, or a related technical discipline.
- A pragmatic, enabling mindset toward AI: you understand the risks but you're not reflexively restrictive. You've thought critically about how organizations can use AI tools like LLMs, coding assistants, and automation responsibly.
- Hands-on experience with compliance frameworks (SOC 2, ISO 27001): you've been through audits and know how to keep controls healthy.
- Strong knowledge of cloud security fundamentals (AWS, GCP, or similar), endpoint protection, and identity/access management.
- Experience with security tooling: EDR, SIEM, vulnerability scanners, DLP, and email security platforms.
- Solid understanding of incident response processes and the ability to stay calm under pressure.
- Familiarity with SaaS environments, remote-first operations, and the security challenges that come with them.
- Strong written communication skills: you can write a clear policy, a concise incident report, and a Slack message that people actually read.
- Self-starter mentality: you're comfortable working autonomously and prioritizing across competing demands.
- Experience evaluating AI/ML tools for data privacy and security risks is a strong plus.
- Experience in vendor risk assessment and third-party security reviews.
- Security certifications (CISSP, CISM, CompTIA Security+, or similar) are a plus but not required.
What you'll get
Compensation & Benefits: Starting salary for this role is $151,000 to $170,000 (or equivalent in local currency), depending on experience and subject to market-rate adjustment. Our inclusive benefits package supports your well-being and growth, including 100% coverage of medical, dental, vision, mental health, and supplemental insurance premiums for you and your family. We also offer 16 weeks of paid parental leave, unlimited PTO, stipends for remote work and wellness, a professional development budget, and more.
Team and environment
As our first dedicated InfoSec hire, you'll be the go-to person for securing our organizational systems, data, and operations across a globally distributed, remote-first company. Reporting to the VP of Operations, you'll work closely with IT, Compliance, and Platform Security to protect customer data, maintain our compliance posture, and help the company adopt AI tools thoughtfully and securely. This is an experienced individual contributor role: you'll be hands-on with tooling and policy, not managing a team. We're a company that embraces AI; we use it in our product and want our team to use it to do their best work. We need someone who sees AI as an opportunity to enable, not just a risk to lock down.

Senior Backend Developer (Golang/Java) - TradingView
TradingView is the world's #1 platform for all things investing. 100M+ users trust us to inform their trading decisions. Want to make an impact? Apply now and help shape the future of finance.
What you'll do
- Design, implement, and maintain backend services for data storage and enrichment using Go (Java is a plus).
- Build and maintain pipelines for processing, validating, and enriching trading data.
- Optimize data storage schemas, queries, and performance in PostgreSQL.
- Collaborate with other engineering teams to integrate services and maintain system reliability.
- Participate in architectural discussions and contribute to long-term platform design.
- Write high-quality, maintainable, and tested code; review peers' work.
- Support monitoring, troubleshooting, and incident resolution.
What makes you the perfect fit
- 5+ years of professional backend development experience, with strong proficiency in Go; Java is a plus.
- Experience with relational databases, preferably PostgreSQL, including schema design, query optimization, and performance tuning.
- Strong understanding of data pipelines, batch and near-real-time processing.
- Experience designing fault-tolerant, high-load backend systems.
- Familiarity with distributed systems and microservices architecture.
- Excellent problem-solving skills and ability to collaborate with multiple teams.
- Nice to have: experience with data enrichment, aggregation, or transformation pipelines.
- Knowledge of caching layers, indexing, or time-series data storage.
- Familiarity with streaming systems (Kafka, RabbitMQ, or similar).
- Experience in financial/trading domain.
- Exposure to cloud environments, containerization, and CI/CD practices.
What we offer you
- Flexible working hours and a hybrid work format.
- Well-equipped offices for focused and collaborative work.
- A global, distributed team of 500+ professionals.
- Learning, mentorship, and long-term career growth.
- Relocation support and private health insurance.
- Performance-based bonuses.
- TradingView Premium access.
- Regular team events and company-wide meetups.
Who you are
- 5+ years of professional backend development experience with Go (Java is a plus).
- Strong proficiency in relational databases (PostgreSQL) and data pipeline concepts.
- Experience designing and working with distributed systems and microservices.
- Excellent collaboration and problem-solving skills.
Tech stack
- Go (and optionally Java)
- PostgreSQL
- Data pipelines and streaming systems (Kafka, RabbitMQ, or similar)
- Cloud environments, containerization, and CI/CD practices
Team description
We are a global, distributed team within TradingView, collaborating across engineering teams to build and maintain the platform used by millions of traders and investors worldwide.
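The validate-and-enrich responsibilities above can be illustrated with a tiny pipeline step. The team's stack is Go; this is a language-agnostic sketch in Python, and every field name and rule below is an assumption for illustration, not TradingView's actual schema.

```python
# Minimal sketch of a validate-then-enrich step for trade records.
# Field names and validation rules are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    price: float
    volume: int

def validate(trade: Trade) -> bool:
    """Reject records that would corrupt downstream aggregates."""
    return bool(trade.symbol) and trade.price > 0 and trade.volume > 0

def enrich(trade: Trade) -> dict:
    """Attach a derived field (notional value) to a valid record."""
    return {
        "symbol": trade.symbol,
        "price": trade.price,
        "volume": trade.volume,
        "notional": trade.price * trade.volume,
    }

def process(batch: list[Trade]) -> list[dict]:
    """Drop invalid records, enrich the rest."""
    return [enrich(t) for t in batch if validate(t)]
```

A real service would add the pieces the listing names on top of this shape: persistence in PostgreSQL, batching, and monitoring.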

Job title: Senior IaaS / Kubernetes Platform Engineer (worldwide remote, work anywhere)
CloudLinux is a global remote-first company. We are driven by our principles: do the right thing, employees first, remote first. We deliver high-volume, low-cost Linux infrastructure and security products that help companies increase the efficiency of their operations. Everyone on our team supports each other and does what they can to ensure we are all successful.
We are looking for a Senior IaaS / Kubernetes Platform Engineer to join our Infrastructure Department and become a key contributor to the design, implementation, and operation of our private cloud and multi-tenant Kubernetes platform.
Our infrastructure powers 500+ VMs across multiple datacenters, serving 20+ engineering teams. We are in the process of evolving from an OpenNebula-based virtualization platform toward a Kubernetes-native multi-tenant cloud with KubeVirt for VM orchestration, while maintaining reliability and operational excellence throughout the transition.
You will work alongside the existing IaaS Tech Lead and Network Engineer, and must be capable of independently owning and operating the full IaaS stack (compute, storage, networking, bare metal) if needed. This is not a "Kubernetes-only" role; it requires deep infrastructure generalist skills combined with Kubernetes platform expertise.
What You Will Do
Kubernetes Platform Engineering (Primary Focus: 40%)
- Design, build, and operate a multi-tenant Kubernetes platform using Cluster API (CAPI) with bare-metal providers (Metal3/Sidero).
- Implement hard multi-tenancy using vCluster (Loft Labs) or similar technology, providing isolated Kubernetes API servers per tenant.
- Deploy and manage KubeVirt for VM orchestration within Kubernetes, including CPU pinning, NUMA awareness, and HugePages configuration.
- Implement GitOps-driven infrastructure using ArgoCD or Flux as the single source of truth for all cluster configurations.
- Deploy and manage Policy-as-Code using Kyverno or OPA Gatekeeper for admission control, resource quotas, and security policies.
- Build self-service capabilities using Crossplane or similar Kubernetes-native infrastructure provisioning tools.
Storage Engineering (20%)
- Operate and optimize Ceph distributed storage clusters (currently 1 PiB raw, 149 OSDs, Quincy 17.2.5).
- Manage Rook-Ceph operator deployments at scale on modern Kubernetes (v1.28+).
- Implement storage tiering: Ceph for bulk storage, local NVMe for high-IOPS workloads, LINSTOR/DRBD or TopoLVM for ultra-fast replicated storage.
- Design and implement per-VM / per-tenant I/O isolation on shared Ceph clusters.
- Manage CDI (Containerized Data Importer) for VM image lifecycle in KubeVirt environments.
Networking (15%)
- Deploy and manage overlay networks for pod networking, micro-segmentation, and WireGuard/IPsec encryption.
- Implement Cluster Mesh for multi-datacenter pod-to-pod connectivity.
- Configure Multus CNI and SR-IOV for multi-NIC VM support in KubeVirt.
- Work with physical network infrastructure: Juniper switches (JunOS), BGP (eBGP/iBGP), EVPN/VXLAN, VLANs.
- Maintain IPSec site-to-site connectivity between datacenters.
Reliability and Operations (15%)
- Practice SRE discipline: define and maintain SLOs with error budgets, implement proactive capacity management with 6-12 month forecasting.
- Design and execute chaos engineering experiments to validate system resilience.
- Participate in on-call rotation for IaaS infrastructure (OpenNebula, Ceph, networking).
- Write and maintain runbooks, DRP documentation, and postmortem analyses.
- Drive proactive improvement: identify reliability risks, performance bottlenecks, and toil, then propose and implement solutions without waiting for incidents.
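The SLO and error-budget discipline described above reduces to simple arithmetic. A minimal sketch, assuming an illustrative 99.9% availability target over a 30-day window (not CloudLinux's actual SLOs):

```python
# Back-of-the-envelope error-budget math for an availability SLO.
# The 99.9% target and 30-day window below are illustrative assumptions.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) for a given availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

For a 99.9% SLO the monthly budget is about 43.2 minutes, which is the number capacity planning and on-call policy get anchored to.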
Infrastructure as Code and Automation (10%)
- Develop and maintain Terraform/OpenTofu modules for multi-cloud infrastructure provisioning.
- Write Ansible playbooks for bare-metal server configuration and fleet management.
- Automate infrastructure lifecycle: PXE boot images, hardware provisioning (Foreman), IPMI management.
- Implement FinOps practices: cost attribution, resource utilization analysis, right-sizing recommendations using OpenCost/Kubecost.
Requirements
Must have
- 5+ years in infrastructure/platform engineering roles, with at least 3 years operating production Kubernetes clusters (not just deploying apps on K8s, but building and managing the platform itself).
- Production experience with at least 3 of the following:
- KubeVirt or similar VM-on-K8s technology
- Cluster API (CAPI) for declarative cluster lifecycle management
- Cilium or Calico (advanced CNI with eBPF or BGP integration)
- Rook-Ceph or other Kubernetes storage operators at scale (100+ OSDs)
- ArgoCD or Flux for GitOps-driven infrastructure management
- Deep Linux systems knowledge: kernel tuning, networking stack (iptables/nftables, routing, bonding, VLAN), filesystem operations, performance troubleshooting.
- Ceph distributed storage experience: cluster operations, OSD lifecycle, pool management, performance tuning, troubleshooting degraded states.
- Infrastructure as Code: Terraform/OpenTofu + Ansible at production scale.
- Bare-metal infrastructure experience: IPMI/iDRAC, PXE boot, RAID configuration, hardware diagnostics, datacenter operations.
- Networking fundamentals: BGP, VLAN, IPSec/WireGuard, DNS, load balancing.
- Strong written and verbal English (B2+ minimum): documentation, postmortems, and cross-team communication are in English.
- Proactive mindset: demonstrated history of identifying problems before they become incidents and driving improvements without being asked.
Nice to have
- Experience building multi-tenant Kubernetes platforms (vCluster, Capsule, or custom namespace isolation).
- Crossplane or similar Kubernetes-native infrastructure abstraction.
- Policy-as-Code: Kyverno, OPA Gatekeeper, or Kubewarden.
- Container security: image signing (Sigstore/cosign), runtime security (Falco), sandboxed execution (Kata Containers, gVisor).
- SRE practices: SLO/SLI design, error budget policies, chaos engineering (LitmusChaos, Chaos Mesh), incident management frameworks.
- FinOps: OpenCost, Kubecost, cloud cost optimization.
- Immutable OS experience: Talos Linux, Flatcar Container Linux, or similar.
- OpenNebula experience (we are migrating FROM it, so understanding it accelerates the transition).
- Experience with LINSTOR/DRBD or TopoLVM for local high-performance storage.
- SR-IOV and DPDK experience for hardware-accelerated networking.
- Experience migrating from traditional virtualization (VMware, OpenNebula, Proxmox) to Kubernetes/KubeVirt.
- Grafana LGTM stack (Mimir, Loki, Tempo) for observability.
- Compliance environment experience (SOC2, ISO 27001, NIS2).
- Go or Python programming for infrastructure tooling.
- Experience with Juniper JunOS switch configuration.
What we're looking for
- Proactive mindset. Our current IaaS workload is still around 50% unplanned work, including incidents and ad hoc support requests. We're looking for someone who can reduce that through better automation, preventive controls, and more resilient systems.
- Platform-minded. You look for ways to replace repetitive support work with scalable solutions, for example, building self-service workflows instead of provisioning VMs manually, or introducing automated QoS policies instead of handling limits case by case.
- Able to work across the current and future stack. We operate OpenNebula and Ceph today while moving toward a Kubernetes-native platform. This role requires someone who can keep the current environment reliable while helping build the next stage in a practical way.
- Transparent in communication. We value technical discussions, architectural decisions, and incident reviews happening in shared channels and documented formats. That includes ADRs, postmortems, and clear written updates.
- Focused on knowledge sharing. You document your work, write runbooks as you go, and help make the platform easier for others to operate and support.
- Strong English communication. Documentation, postmortems, Jira updates, Slack discussions, and cross-team collaboration are conducted in English.
Benefits
What's in it for you?
- A focus on professional development.
- Interesting and challenging projects.
- Fully remote work with flexible working hours that let you schedule your day and work from any location worldwide.
- 24 paid vacation days per year, 10 national holidays, and unlimited sick leave.
- Compensation for private medical insurance.
- Co-working and gym/sports reimbursement.
- Budget for education.
- The opportunity to receive a reward for the most innovative idea that the company can patent.
By applying for this position, you consent to the processing of your personal data as described in our Privacy Policy, which provides detailed information on how we maintain and handle your data.
Lead Data Engineer
We're looking for a Lead Data Engineer (Azure, DWH) to join a long-term project in the insurance domain. You will take ownership of a strategic initiative: building a modern cloud-based data warehouse that will replace legacy reporting and month-end processes. This role combines hands-on engineering, technical leadership, and direct collaboration with the client.
Major responsibilities:
- Act as a primary technical point of contact for the client, clarifying requirements and translating business goals into technical solutions
- Own and drive the DWH delivery plan, including prioritization, planning, and progress tracking
- Identify risks and dependencies (data, legacy systems, delivery) and proactively propose mitigation strategies
- Perform code reviews to ensure high quality of SQL, PySpark, and data models across the team, maintaining consistent quality and architectural alignment
- Align technical solutions with stakeholders and present trade-offs and recommendations
- Distribute tasks within the team and maintain a sustainable development pace.
We'd love to hear from you if you have:
- 5+ years of experience in Data Engineering, including 2+ years in Azure
- Expert-level SQL skills (complex analytical queries, performance optimization)
- Strong experience with Azure Synapse Analytics, PySpark
- Hands-on experience with Azure Data Lake Storage Gen2 and data layer design (Raw / Silver / Gold)
- Experience with Azure SQL Managed Instance
- Strong knowledge of data modeling (Kimball / Inmon, SCD, historical data handling)
- Solid experience designing and building enterprise Data Warehouses (fact/dimension modeling, aggregation layers)
- Experience in technical leadership (code reviews, architecture decisions, mentoring)
- Experience working in Scrum teams and managing delivery from high-level requirements
- English level B2+ (regular communication with the client).
Nice to have:
- Experience with Python and Azure Data Factory
- Background in insurance domain (premiums, commissions, taxes, etc.)
- Familiarity with BI tools (e.g., Power BI)
- Experience with lift-and-shift migration from legacy systems
- Azure certifications (e.g., DP-203).
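The SCD knowledge the requirements call for (Type 2 in particular) can be illustrated with a minimal in-memory sketch: when a tracked attribute changes, close the current row and open a new one. Column names and the sentinel end date are illustrative assumptions, not the client's actual model.

```python
# Minimal Slowly Changing Dimension Type 2 sketch. Column names and the
# 9999-12-31 "current row" sentinel are illustrative conventions.

from datetime import date

OPEN_END = date(9999, 12, 31)  # conventional end date for the current row

def apply_scd2(history: list[dict], key: str, new_value: str,
               as_of: date) -> list[dict]:
    """Apply a change to the dimension history (no-op if value unchanged)."""
    current = next(
        (r for r in history if r["key"] == key and r["end_date"] == OPEN_END),
        None,
    )
    if current is not None and current["value"] == new_value:
        return history  # nothing changed, keep the open row
    if current is not None:
        current["end_date"] = as_of  # close the old version
    history.append(
        {"key": key, "value": new_value, "start_date": as_of,
         "end_date": OPEN_END}
    )
    return history
```

In the actual project this logic would live in SQL or PySpark over Synapse tables; the sketch only shows the versioning rule itself.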
Senior ML Engineer
What you'll enjoy at PravoTech
- Current tech stack: ReactJS, Docker, .Net
- Develop useful IT products
- Flexible processes
- Strong team with full product responsibilities
- Everything official: employment under Russian labor law, fully declared ("white") salary, vacations, sick leave, and financial assistance for important life events
- Care for people: private health insurance (DMS) including dental, a corporate trainer, and meals or English lessons
What we expect from you at PravoTech
Your tasks
- Develop and implement ML-based solutions
- Create databases and prototypes for effective ML models and algorithms in software products
- Programming languages: Python Advanced (required), C++ (desirable), Java/Scala/Clojure or others (welcome)
- Linux knowledge (Ubuntu, CentOS) at admin level, shell and bash proficiency
- SQL knowledge, database administration and development: ClickHouse, PostgreSQL or Oracle
- Experience developing, validating and testing ML models, implementing ML algorithms on CPU and GPU
- Mathematical toolkit applied to ML
- Remote work
- 3+ years of experience
- Salary from 300,000 ₽
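The expectation of "implementing ML algorithms" is about fundamentals like the following dependency-free sketch: fitting y = w*x + b by batch gradient descent. The data and hyperparameters are invented for illustration.

```python
# Tiny, dependency-free sketch of fitting a 1-D linear model
# y = w*x + b by batch gradient descent on mean squared error.
# Learning rate and epoch count are illustrative choices.

def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Least-squares fit via batch gradient descent; returns (w, b)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On data generated from y = 2x + 1 this converges to w close to 2 and b close to 1; production work would of course use NumPy or GPU-backed libraries rather than pure Python loops.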

WHAT YOU'LL DO
Architecture & Strategy
- Define and maintain the architecture strategy for all Python-based systems.
- Lead the migration of product components into standalone Python services.
- Evaluate and guide the adoption of modern frameworks, tools, and cloud solutions.
- Identify technical risks, scalability challenges, and improvement opportunities and propose actionable solutions.
Technical Excellence
- Design and evolve distributed, event-driven, and high-load systems in Python.
- Establish and enforce coding standards, CI/CD best practices, and testing automation.
- Collaborate with DevOps and Infrastructure to enhance observability, reliability, and deployment pipelines.
- Optimize systems for performance, resilience, and maintainability across multiple environments.
Collaboration & Mentorship
- Partner closely with AI/ML, Data Engineering, and Infrastructure teams to ensure architectural alignment.
- Mentor engineers, lead architecture reviews, workshops, and knowledge-sharing sessions.
- Communicate complex technical concepts clearly to both engineering teams and business stakeholders.
TO SHINE IN THIS ROLE
- 7+ years of professional experience with Python in production-grade, large-scale systems.
- Proven experience designing and operating microservices or distributed architectures.
- Deep understanding of asynchronous programming, concurrency, and Python performance optimization.
- Hands-on experience with CI/CD pipelines, automated testing (pytest, unittest), and monitoring tools.
- Strong knowledge of REST/gRPC APIs, message brokers (RabbitMQ, Kafka), and databases (PostgreSQL, Redis, MongoDB).
- Proficiency with cloud platforms (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes).
- Solid grasp of system design principles, scalability strategies, and performance optimization.
- Excellent communication skills with a collaborative, solution-oriented mindset.
- Experience mentoring engineers and establishing engineering best practices.
- Experience working with AI or Data-intensive services is a strong plus.
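The asynchronous-programming and concurrency expectations above can be illustrated with a minimal asyncio sketch: fan out several I/O-bound calls and gather the results. The `fetch` coroutine below is a stand-in assumption for a real service call.

```python
# Minimal asyncio fan-out sketch: run several I/O-bound calls
# concurrently so total latency tracks the slowest call, not the sum.
# The sleep-based "fetch" is a placeholder for real network I/O.

import asyncio

async def fetch(name: str, delay: float) -> str:
    """Pretend network call: sleeps instead of doing real I/O."""
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def fan_out() -> list[str]:
    """Launch the calls concurrently and collect results in order."""
    return await asyncio.gather(
        fetch("users", 0.01),
        fetch("orders", 0.02),
        fetch("billing", 0.01),
    )

results = asyncio.run(fan_out())
```

`asyncio.gather` preserves argument order, which keeps downstream handling deterministic even though the calls complete at different times.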
WHAT WE OFFER
We care deeply about your growth, well-being, and comfort:
- Hybrid onboarding to start work remotely and relocation support for you and your family.
- Comprehensive health insurance for both you and your family.
- Professional development budget for conference tickets, online courses, and other relevant resources to help you grow.
- Flexible benefits package to tailor the perks that matter most to you.
- Hybrid work and generous leave options to prioritize your work-life balance.
- In-office perks, including free meals and snacks.
- Company-funded sports activities, annual offsites, and team-building events.

Join Skyro as Senior / Middle System Analyst (Broker / Loans)
Who are we?
We make a financial product that is already changing the Philippine financial market. We went live more than three years ago; our customer base keeps growing and our financial results keep improving.
What does the team do?
The main services supported by the team are the loan application process for all the company's products from the first touch of the applicant to the activation of the financial product. We also manage operational services for managing merchant partners, agent network and their daily schedules.
Who are we looking for?
An experienced systems analyst to help build new software solutions that improve customer experience.
What will you do?
- Assist in gathering and analysing business requirements to understand our customers' needs and translate them into actionable solutions
- Coordinate within the team and across teams on large cross-team projects to keep solutions consistent
- Work on designing the technical details of the solution together with the software engineering and QA team
- Write technical specifications and other necessary documentation for our projects
Technologies and technology products on the team
Kotlin, PostgreSQL, React, Camunda 7, gRPC, HTTP REST API, Kafka, AWS, k8s, Grafana, Snowflake
Language Skills
- Fluent Russian is required for daily team communication.
- English Level B2 or higher to collaborate with international colleagues and review documentation.
Soft Skills
- Self-driven and capable of making informed decisions autonomously.
- Strong communication skills to collaborate effectively across teams.
- Experience with Kanban or other agile methodologies is a plus.
Why Join Skyro
- At Skyro, we offer a unique opportunity to combine impactful work with a supportive and dynamic environment.
- Work From Anywhere: no location constraints, salaries in USD, and a global mindset.
- Healthcare Support: partial reimbursement of medical expenses to ensure your well-being.
- Generous Leave Policy: 31 calendar days of paid vacation per year to ensure a healthy work-life balance.
- Professional Growth: compensation for professional courses or conferences to support your career development.
- Language Learning: access to corporate group English classes to improve your communication skills.
- Annual Performance Bonus: rewarding your contributions with a yearly bonus.
- Corporate Event Travel: full coverage of airfare to attend corporate events in Manila every December.
What happens after you apply?
We review applications on a rolling basis and aim to get back within 2-3 business days. If there's a fit, we'll reach out. If you don't hear from us within 2-3 weeks, consider it a pass. Thanks for taking the time; we appreciate your interest.
Collaboration Notice
As we build our business in the Philippines, please note that the workday should start no later than 2 PM (GMT+8)/7 AM (CET) to ensure effective collaboration within our international team.

System Engineer โ Senior
Overview:
SOFTSWISS continues to expand the team and is looking for an experienced System Engineer. We need an accomplished professional who shares our culture and values.
Purpose of the Role
You'll work on infrastructure and CI/CD systems, driving stability, automation, and scalability for our platforms and services.
Key Responsibilities:
- Set up and maintain (updates, problem fixes) infrastructure for production/staging environments
- Improve infrastructure setup and maintenance processes
- Communicate and collaborate with the product development team and stakeholders
- React to monitoring events (periodic day/night on-call duties)
- Support and develop GitLab CI pipelines
- Participate in the design of complex IT systems
Tech Stack:
- SaltStack
- ClickHouse
- Kafka
- Kubernetes
- GitLab
- Vault
- PostgreSQL (+ Patroni)
- Redis
- Terraform/Pulumi
- ELK
- Zabbix / Prometheus + Grafana
Required Experience:
- 3+ years of experience as a system engineer or SRE/DevOps
- Good understanding of Linux-like operating systems (administration and troubleshooting)
- Experience with configuration management systems (Ansible / Saltstack / Terraform / Pulumi)
- Experience with ClickHouse cluster administration and management
- Experience with Kubernetes+Helm
- Experience with monitoring systems (Prometheus stack/Zabbix)
- Problem-solving skills
- Higher technical education
- Intermediate or higher English and Russian (B1+)
Nice to have:
- Experience with management of Tableau and DataHub services
- Experience with distributed systems
- Experience with Bash/Python/Go languages
- Experience with log aggregating systems (ELK/EFK stack)
- Experience with cloud provider services
- Experience with PostgreSQL clusters (self-hosted)
- Experience with domain management (CloudFlare/Route53)
- Experience with workflow in Agile like framework (Kanban/Scrum)
- Demonstrated ability to leverage AI-powered tools and platforms to optimize workflows and support decision-making
Main Advantages
- Private insurance (depending on contract type)
- Paid gym membership
- Comprehensive Mental Health Program
- Free English lessons (online)
- Local language courses
- Paid time off (PTO)
- Maternity leave support
- Referral program rewards
- Upskilling, internal workshops, and participation in professional conferences and corporate events

Smart Contract QA Engineer (Oracle)
What you'll do
- Design and implement automated testing frameworks for oracle smart contracts, covering unit tests, integration tests, and end-to-end tests.
- Develop and execute security test cases, focusing on core scenarios such as price data feeds, off-chain data retrieval, multi-party consensus mechanisms, and resistance to Sybil attacks.
- Simulate various on-chain and network abnormal conditions (e.g., high Gas fees, network latency, node failures) to conduct stress testing and fault tolerance testing.
- Work closely with the development team to perform vulnerability scanning and assist in code audits before contract deployment, ensuring no critical security risks.
- Create and maintain clear test documentation, defect reports, and quality assessment reports.
- Participate in verifying the accuracy of oracle node data and conducting performance benchmarking.
- Continuously follow blockchain testing tools and best practices, and introduce new testing methodologies to enhance efficiency.
Requirements
- Bachelorโs degree or higher in Computer Science, Software Engineering, or a related field.
- 3+ years of experience in smart contract testing or development, with proficiency in Solidity and mainstream testing frameworks (e.g., Hardhat, Truffle, Foundry).
- Deep understanding of oracle mechanisms (e.g., Chainlink, Band Protocol) and awareness of common attack vectors (e.g., flash loan attacks, data tampering).
- Familiar with fundamental blockchain concepts (consensus mechanisms, Gas optimization, event logs, etc.) and tools (e.g., Web3.js, Ethers.js).
- Capable of developing automated testing scripts using JavaScript/TypeScript, Python, or similar languages.
- Experience in security testing or code auditing is preferred, with knowledge of common vulnerabilities (e.g., reentrancy, integer overflow) and mitigation methods.
- Strong communication skills and a collaborative mindset, adaptable to agile development environments.
Preferred Qualifications
- Hands-on experience in testing or developing oracle projects, with familiarity in decentralized data sources and node networks.
- Knowledge of zero-knowledge proofs, TEE (Trusted Execution Environment), and other privacy-related oracle technologies.
- Experience with performance testing tools (e.g., K6, Gatling) or on-chain monitoring tool development.
- Open-source contributions to blockchain projects on GitHub or demonstrable testing case portfolios.
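One of the core test scenarios above, resistance to manipulated price reports, can be sketched with a median aggregator in Python (one of the scripting languages the role names). The node reports below are invented for illustration; real oracle tests would exercise deployed contracts through frameworks like Hardhat or Foundry.

```python
# Toy oracle-testing sketch: a median aggregator should tolerate a
# minority of nodes reporting manipulated prices. Node data is invented.

from statistics import median

def aggregate_price(reports: list[float]) -> float:
    """Median of node reports: robust to outliers from misbehaving nodes."""
    if not reports:
        raise ValueError("no oracle reports")
    return median(reports)

# Honest nodes cluster around 100; one node tries to push the feed to 10000.
honest = [99.8, 100.0, 100.1, 100.2]
attacked = honest + [10_000.0]
```

The security test then asserts that a single hostile report barely moves the aggregate, which is exactly the property a Sybil- or manipulation-resistance suite checks at the contract level.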

Job description
As an Engineering Manager for Composition Analysis, you'll lead a team building the software composition analysis capabilities that help GitLab customers find and fix vulnerabilities in their application dependencies and software supply chain. You'll guide engineers working on software composition analysis and container scanning, and you'll be responsible for setting priorities, shaping product architecture, and running agile processes so that our security offerings stay effective, reliable, and easy to use in real DevSecOps environments. You'll balance complex, security-focused roadmaps and author project plans so that customers get a robust composition analysis experience within GitLab. In your first year, you'll drive key initiatives like auto-remediation of vulnerable packages and auto-fix breaking changes with AI, scanning unmanaged C/C++ dependencies, static reachability analysis, malicious package detection, and snippet detection for open source dependencies.
Some examples of our projects:
- Building hyper-scale vulnerability detection engines for millions of GitLab users around the world
- Designing auto-remediation workflows for vulnerable open source and third-party dependencies
- AI-driven auto-fixes for breaking changes introduced by dependency bumps
What youโll do
- Lead engineers across the Composition Analysis team, setting clear priorities and expectations.
- Drive key security initiatives, including auto-remediation of vulnerable software packages, scanning unmanaged C/C++ dependencies, static reachability analysis, and snippet detection for open source dependencies.
- Balance priorities and resources across the Composition Analysis team to ensure sustainable delivery and high-quality outcomes.
- Author and maintain project plans for epics within the Composition Analysis team, aligning work, identifying dependencies, and ensuring quality delivery.
- Run agile project management processes for the Composition Analysis team, including planning, estimation, and continuous improvement of delivery practices.
- Provide guidance on the architecture of software composition analysis solutions, ensuring they are robust, scalable, and effective.
- Collaborate closely with the Composition Analysis team to ensure consistent, high-quality approaches to application security across GitLab's platform.
Who you are
- Background leading multiple technical teams or groups, ideally in application security or cloud security
- Practical understanding of software composition analysis, including how to assess and manage risks in application dependencies
- Familiarity with containerization technologies, package managers, and dependency management systems
- Experience working with or around open source security tooling (for example, Syft, Grype, Trivy, or similar tools)
- Ability to plan and run agile project management processes for the Composition Analysis team, including coordinating priorities and dependencies.
- Skill in guiding product and architecture decisions for security scanning tools, balancing technical constraints with customer needs
- Openness to candidates with transferable experience in security engineering, DevSecOps, or vulnerability management who are motivated to grow in application security leadership
Team description
The Composition Analysis team at GitLab sits within our security product area and focuses on building and improving our software composition analysis capabilities across the DevSecOps platform. We own core features such as software composition analysis, container scanning, and related remediation workflows. You'll lead our distributed group of security-focused engineers as we collaborate asynchronously across time zones using GitLab itself for planning, code review, and delivery. Right now, we're focused on advancing capabilities like auto-remediation of vulnerable packages, scanning unmanaged C/C++ dependencies, static reachability analysis at the function level, and snippet detection for open source dependencies.
Level 3 IT Support Engineer
Platform Operations - Limassol
Duties and opportunities
- Assess issues and provide solutions for incidents and problems that cannot be handled by tiers 1 and 2
- Manage the incident life cycle until incidents are fully resolved or a workaround is provided, escalating to level 4 support where required
- Periodically perform analysis to see if new problems need to be registered
- Coordinate root cause analysis
- Support hot-fix deployment process
- Perform log level analysis
- Take end-to-end ownership of customer technical issues, including initial troubleshooting, identification of root cause, issue resolution, and communication
- Qualification/replication of the reported issue in an appropriate customer environment
- Information gathering to ensure complete availability of details required for root cause analysis
- Provision of technical resolution or problem workaround
Requirements
- 2+ years of experience in IT
- Strong hardware/software analysis, problem-solving, and troubleshooting skills
- Deep knowledge of HTML/CSS and HTTP(S)
- Understanding of client-server architecture
- Experience in bash/shell programming (nice to have)
- Experience in SQL/KQL/JQL querying and managing data
- Strong Debugging skills
- Experience working with logging, monitoring, and alerting tools (e.g. ELK stack, Grafana, PagerDuty)
- Ability to perform log level analysis
- Structured and process-oriented
- Self-learning ability, self-motivated and team player
- Ambition to learn new systems, procedures, techniques in a short period of time
- Experience with bug and issue tracking systems (Jira preferred)
- Ability to problem-solve independently and multitask
- Understanding of ITIL methods
- Understanding of the systems development life cycle
- Pro-active, resourceful with high level of accuracy and attention to detail
- Ability to meet strict deadlines and manage stress effectively
- Strong communication and reporting skills
- Experience in gambling/betting (nice to have)
What you'll do
- Assess and resolve incidents that cannot be managed by lower tiers
- End-to-end ownership of technical customer issues from troubleshooting to resolution
- Coordinate root cause analysis and implement solutions or workarounds
- Support deployment of hot fixes and perform log analysis
Who you are
- Self-motivated, proactive, detail-oriented, and a strong communicator
- Team player with the ability to work under pressure and meet deadlines
- Willing to learn new systems and procedures quickly
Tech stack
- HTML, CSS, HTTP(S)
- Bash/shell scripting
- SQL/KQL/JQL
- Logging/monitoring tools (ELK stack, Grafana, PagerDuty)
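The log-level analysis duty above can be sketched in a few lines of Python. The timestamp-then-level log format is an illustrative assumption; real formats vary per service and would normally be queried through the ELK stack.

```python
# Small sketch of log-level analysis: tally entries per severity from
# raw log lines. The "<timestamp> LEVEL message" format is an assumption.

from collections import Counter
import re

LEVEL_RE = re.compile(r"\b(DEBUG|INFO|WARN|ERROR|FATAL)\b")

def count_levels(lines: list[str]) -> Counter:
    """Count log levels across lines; lines without a level are ignored."""
    counts = Counter()
    for line in lines:
        match = LEVEL_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

A spike in the ERROR count per time window is the kind of signal a tier-3 engineer correlates with deployments or incidents before digging into individual stack traces.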
Benefits, perks
- Learning and development opportunities and challenging tasks
- Official employment in accordance with Cyprus and EU laws, family member registration
- Relocation package (tickets, hotel for 2 weeks)
- Office fitness corner
- Language skills development and partial cost coverage for language classes
- Birthday celebration present
- 24 working days of annual vacation
- Breakfasts and lunches in the office (partially paid by the company)