Data Solutions - Architect

Ayasdi

IT
India · Bengaluru, Karnataka, India
Posted on Mar 13, 2026
Job Description

IRIS Foundry Data Platform

Role Overview

We are looking for an Architect with prior experience handling data platform infrastructure and products to architect, implement, and maintain data pipelines for the different datasets supported by IRIS Foundry. This role works closely with product and customer engagement teams to deliver fully functional data pipelines with 99.9% uptime, and with the engineering team to incorporate the latest product features into pipeline templates, raise alerts, and troubleshoot functional issues while avoiding data ingestion downtime. This role is expected to work independently and lead a small team of senior, mid-senior, or junior engineers to realise the customer delivery roadmap.


Key Responsibilities

1. Collect data requirements from Professional Services and negotiate the data acquisition strategy with customers to bring in different datasets (time-series, tabular, document stores).

2. Understand the capabilities of the Foundry Data Platform and update the set of supported data pipeline templates based on newly added features.

3. Communicate with the customer engagements team to understand upcoming customer deliverables and PoCs, and prepare timelines for delivery.

4. Attend customer meetings to understand the architecture of the customer's ecosystem, present the available pipeline options, ask the right questions, and collect the information needed to choose the right pipeline template.

5. Develop a complete understanding of each dataset and incorporate custom clean-up and preprocessing steps into the pipeline accordingly.

6. Propose monitoring and alerting rules for all data pipelines, implement custom rules based on the use cases, and update them as environments change.

7. Drive the conversation when changes are expected on the customer side, and plan the downstream modifications needed to avoid data disruptions.

8. Establish best practices for writing and automating component tests for data pipelines.

Expected Outcomes

1. Improve the reliability of data ingestion to 99.9% uptime.

2. Reduce data disruptions by improving monitoring and alerting for data pipelines.

3. Configure custom dashboards per tenant in multi-tenancy cloud environments.

4. Proactively identify bottlenecks and suggest items for product roadmap.

Required Qualifications

1. Understanding of cloud-native services on Azure, AWS, and GCP.

2. Understanding of SQL databases like Postgres; NoSQL databases like MongoDB and Elasticsearch; and messaging systems like Azure queues, event hubs, etc.

3. Hands-on coding and debugging skills in Python, Groovy, and Java are a must.

4. Prior experience with runtime containerization using Docker and Kubernetes, and with CI/CD pipelines.

5. Experience with Apache NiFi, Apache Airflow, or comparable workflow management tools.

6. Knowledge of Prometheus, Grafana, and Kibana is preferred.

7. Candidates should be comfortable using GenAI tools like Cursor as an effective assistant to rapidly build, test, and automate the deployment of data pipelines.