Solutions Engineer — AI & Data Science Specialist

Volterra

Software Engineering, Data Science
Dublin, Ireland
Posted on Jan 28, 2026

At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation.

Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.

Role Overview

F5 is expanding its AI Center of Excellence and is hiring a Specialist Solutions Engineer with deep expertise in AI, Data Science, and LLM behavior to support our AI Runtime Security portfolio.

This is a highly specialized SE role designed to fill a critical gap between customer-facing Solutions Engineering and internal data science. The primary focus of this role is to interpret, analyze, and explain AI security testing results, particularly outcomes from Proofs of Concept (POCs), red-teaming exercises, and runtime guardrail evaluations.

You will act as the AI/ML subject-matter expert within the Solutions Engineering organization, helping customers and internal teams understand:

  • Why scanners trigger (or don’t)
  • The tradeoffs between false positives and false negatives
  • Model behavior under adversarial or ambiguous inputs
  • How tuning, thresholds, and policy design impact real-world outcomes

Think of this role as a hybrid of Solutions Engineer, Prompt Engineer, and Applied AI Analyst: deeply technical, customer-facing, and outcome-oriented.

What You’ll Do

AI & Data Science Specialization

  • Analyze and interpret results from AI Runtime Security POCs, including red-team campaigns, prompt/response scans, and inference-layer inspections.
  • Diagnose false positives and false negatives, explaining root causes in clear, customer-friendly language.
  • Help define acceptable risk thresholds and success criteria for enterprise AI security deployments.
  • Partner with customers to refine prompts, policies, scanner descriptions, and evaluation strategies.
  • Act as the escalation point for complex AI behavior questions during evaluations and pilots.

Customer & GTM Enablement

  • Partner with Account Executives and core Solutions Engineers during late-stage evaluations and technical deep dives.
  • Support customer workshops focused on AI testing methodology, evaluation frameworks, and AI risk interpretation.
  • Translate model behavior and statistical outcomes into business-relevant narratives (risk, compliance, trust, readiness).
  • Assist in shaping POC readouts, executive summaries, and customer-facing reports.

Internal Collaboration & Enablement

  • Serve as the bridge between Solutions Engineering, Product, and Data Science when interpreting scanner performance and model behavior.
  • Help define internal best practices for:
    • FP/FN analysis
    • Evaluation datasets
    • Prompt and policy tuning
    • Scanner validation strategies
  • Create internal guidance, playbooks, and examples to raise the overall AI literacy of the SE team.
  • Provide feedback to Product and Engineering based on real-world customer testing patterns.

What You Bring

Required

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, AI, or a related technical field.
  • 5+ years of experience in a technical, customer-facing role (Solutions Engineer, ML Engineer, Data Scientist, Applied AI Engineer, or similar).
  • Strong understanding of:
    • Large Language Models (LLMs)
    • Prompt engineering and prompt evaluation
    • Model behavior, bias, and limitations
    • False positive / false negative tradeoffs in ML systems
  • Experience analyzing model outputs, classification results, or evaluation metrics.
  • Ability to explain complex AI/ML concepts clearly to non-data-scientists.

Strongly Preferred

  • Hands-on experience with prompt engineering, LLM evaluation, or model testing.
  • Familiarity with AI security concepts such as:
    • Prompt injection
    • Jailbreaks
    • Data leakage
    • Model misuse and abuse patterns
  • Experience working with real customer datasets or evaluation pipelines.
  • Comfort working with Python, notebooks, or lightweight analysis tooling (even if not production-focused).

Ideal Candidate Profile

  • You enjoy explaining why models behave the way they do.
  • You’re comfortable living in the gray areas of AI—not everything is deterministic, and that excites you.
  • You can balance statistical rigor with practical, customer-facing recommendations.
  • You’re energized by helping teams and customers make informed decisions, not just “pass/fail” judgments.
  • You thrive in fast-moving environments where AI technology, security threats, and customer expectations are evolving rapidly.

Why This Role Matters

As AI adoption accelerates, customers are no longer just asking “Does it work?”—they’re asking:

  • Is this safe?
  • Can we trust the results?
  • Are these false positives acceptable?
  • What risk remains if we loosen or tighten controls?

This role exists to answer those questions with confidence, clarity, and credibility.

Why Join F5’s AI Center of Excellence

You’ll be part of a small, high-impact team shaping how enterprises evaluate, trust, and secure AI systems at runtime. You’ll work on some of the most complex and interesting AI security problems in the market—while having real influence on customers, products, and strategy.

If you’re passionate about AI behavior, evaluation, and real-world impact—and want to apply that expertise in a customer-facing role—we’d love to talk.

The Job Description is intended to be a general representation of the responsibilities and requirements of the job. However, the description may not be all-inclusive, and responsibilities and requirements are subject to change.

Please note that F5 only contacts candidates through an F5 email address (ending with @f5.com) or automated email notifications from Workday (ending with f5.com or @myworkday.com).

Equal Employment Opportunity

It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. To request an accommodation, contact accommodations@f5.com.