Remote, but you must be based in the following locations
Audigent is the leading curation, data activation and identity platform. Audigent’s pioneering data platform unlocks the power of privacy-safe, first party data to maximize addressability and monetization of media at scale without using cookies. As one of the industry’s first data curation platforms powered by its unique identity suite (Hadron ID™), Audigent is transforming the programmatic landscape with its innovative SmartPMP™, ContextualPMP™ and CognitivePMP™ products, which use artificial intelligence and machine learning to package and optimize consumer-safe data with premium inventory supply at scale. Providing value and performance for the world’s largest brands and global media agencies across 100,000+ campaigns each month, Audigent’s verified, opt-in data drives monetization for premium publisher and data partners that include: Condé Nast, TransUnion, Warner Music Group, Penske Media, a360 Media, Fandom and many others. For more information, visit www.audigent.com.
Audigent’s data team maintains the core of our data ingestion infrastructure and data lake, and provides the tools and best practices the rest of the business relies on to maintain and use that data lake.
Audigent is looking for a collaborative, curious and hands-on software engineer to help improve the data infrastructure and delivery processes for our SmartPMP platform and our core data team. We expect you to have a good command of at least one programming language (ideally Python) and a solid grasp of data structures, software development techniques and data modelling.
You’ll be an important part of our team, working closely with senior engineers in the Data Team as well as engineers from other teams. Your work will be split between maintaining the legacy codebase and contributing to new bespoke products and features.
Sustaining operations of a working, profitable adtech stack
Implementing automation, optimization and analysis processes across our organization
Collaborating with lead and senior engineers
Testing, debugging and delivering working solutions
Using appropriate technology to enable access to external data sources
Creating, documenting and maintaining complex data pipelines
Preparing data for storage, analytics and ML
Integrating, transforming and optimizing data
Building data-centric, optimized APIs
Experience with: Python, SQL, Pandas, AWS, Athena (Presto) or similar
Experience with big data (Hadoop, Spark, Kinesis or similar) and ETL data pipeline technologies like Airflow or AWS Glue.
Experience with microservices, REST APIs and similar architectures
Ability to collaborate on projects and work independently when required
Strong analytical skills
Track record of delivering working end-to-end solutions
Must be a conscientious worker who wants to have real impact and is comfortable in a fast-paced start-up environment
Excellent communication skills, in particular when discussing technical concepts
Experience delivering production data within AdTech
Demonstrable knowledge of the AdTech ecosystem and/or its data flows, specifically including DMPs, SSPs, DSPs, digital campaign metrics, first- and third-party identity, and non-cookie-based identity systems
Django or Flask experience
Experience with a modern compiled language (e.g., Rust, Go)
DevOps (Docker, Terraform) experience
Experience with additional cloud providers (GCP, Azure)
Compensation will be based on skills and experience