Want to help us bring our speed and efficiency to big-data analytics on the cloud?
Firebolt delivers extreme speed and elasticity at any scale, solving your impossible data challenges.
Firebolt was built with three principles in mind:
- Firebolt uniquely combines a wide range of technologies to deliver query performance that is unprecedented at terabyte and petabyte scales:
- Columnar data structure for faster analytics workloads
- Vectorized processing and SIMD utilization for massive throughput at the CPU level
- Just-in-time query compilation using LLVM for hardware-optimized query plans
- Continuously aggregated indexing for exceptional data ingestion speeds and near-instant data updates
Firebolt was built from the ground up with huge datasets in mind. Our unique technology combines the best of high-performance database architecture with the infinite scale of the data lake, guaranteeing unparalleled performance at any scale. Clusters of compute nodes use Massively Parallel Processing (MPP) to parallelize queries across nodes, so performance stays fast as data grows.
Firebolt is built on a decoupled storage & compute architecture, with native support for quickly scaling compute resources up and down per warehouse, database, and query. This granular control over resources and elasticity lets you assign as many resources as needed, only when you really need them, while avoiding overpaying for unused resources.
About our Tech stack:
- Firebolt is composed of several open-source projects combined with unique IP that boosts data analytics and enables full scalability by decoupling compute from storage.
- Our SQL core teams work with C++.
- Our backend teams work with Go, Python, and Rust to build microservices exposing REST APIs and GraphQL interfaces.
- We use both CockroachDB and FoundationDB as application data storage.
- Our frontend teams work with TypeScript, React, Redux + Apollo.
- CI/CD is handled by a combination of CircleCI and CodeDeploy to test and deploy code to production.
- The infrastructure is managed as code with Terraform and services are monitored using Prometheus and Grafana.
About the job to be done:
- Be a key part of our R&D team.
- Take part in the definition of our R&D quality standards.
- Design, build, and maintain an end-to-end QA & automation process of a global high-scale SaaS product.
What we're looking for:
- BS/Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience developing automation frameworks.
- Experience in automating production systems with Python.
- Experience with concurrency tools in SaaS products.
- Experience building test automation frameworks for software products deployed on AWS (or one of the other leading cloud providers: Azure/GCP).
- Experience working with modern software lifecycle tools: Git, CI/CD.
- Experience performing code reviews and mentoring team members on automation concepts and best practices.
- Good knowledge of Linux.
- Experience with data warehousing (Redshift, Snowflake, Athena, etc.) - a big plus!
What we offer you:
- An opportunity to make an impact on the industry's future and be part of a disruptive, groundbreaking product.
- In-depth exposure to a modern cloud-scale distributed data warehouse.
- Competitive salary and benefits (including pension plans, insurance, and more).
- IT equipment and tools to allow you to be productive.