Software delivery is still stuck in slow motion. Backlogs grow, cycle times are measured in weeks, and “code review” too often means rubber-stamping to hit a sprint deadline. Most teams are drowning in integration risk, brittle architectures, and unclear requirements. Meanwhile, AI coding agents are shipping credible code, but only when they’re fed precise specs and supervised with rigor. The industry keeps throwing more humans at the keyboard instead of fixing the system that turns intent into reliable, production-grade software.
Trilogy is taking the opposite path. We run an agentic development workflow end-to-end: engineers drive research, specification, architecture, and quality, while AI agents handle implementation. Specs are written to be executable by Claude Code, test-first and surgically scoped to minimize blast radius. The result is continuous delivery of production features with dramatically tighter feedback loops, fewer surprises in integration, and higher confidence in correctness before a single line lands on main.
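To make “test-first and surgically scoped” concrete, here is a minimal sketch of what an agent-readable spec fragment could look like in Python. The IdempotencyStore feature is hypothetical, invented purely for illustration: the engineer writes the acceptance tests and a tightly scoped stub up front, and the tests fail by design until the agent’s implementation makes them pass.

```python
# Hypothetical spec fragment (illustrative only). The engineer authors the
# acceptance tests and a scoped stub; the agent's job is to implement the
# stub so the tests pass, without touching anything outside this module.


class IdempotencyStore:
    """Stub the agent must implement. Scope: this class only."""

    def put_once(self, key: str, value: str) -> bool:
        """Record value under key. Return True on the first write;
        return False and keep the original value on any repeat."""
        raise NotImplementedError  # fails by design until the agent implements it

    def get(self, key: str) -> str | None:
        """Return the stored value, or None if the key is unknown."""
        raise NotImplementedError


# Acceptance criteria, encoded as tests (run with pytest):

def test_first_write_succeeds():
    store = IdempotencyStore()
    assert store.put_once("req-1", "charged $10") is True
    assert store.get("req-1") == "charged $10"


def test_repeat_write_is_ignored():
    store = IdempotencyStore()
    store.put_once("req-1", "charged $10")
    assert store.put_once("req-1", "charged $20") is False
    assert store.get("req-1") == "charged $10"  # original value preserved


def test_unknown_key_returns_none():
    assert IdempotencyStore().get("missing") is None
```

The scope comment and the failing tests give the agent an unambiguous definition of done and a hard boundary on blast radius, which is what makes a spec like this executable rather than aspirational.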
This job is for the engineer who wants to design systems and verify correctness, not grind tickets. You’ll spend roughly 10–20% of your time on research and technical discovery to understand existing behavior, constraints, and stakeholder needs; 40–50% converting that understanding into precise, test-driven, agent-readable specifications and architectural decisions; and the remainder orchestrating and supervising implementation. You are accountable for what ships: no silent failures, no ambiguous specs, no “the agent did it.”
You’ll join a team that treats specification as a product, review as an engineering discipline, and delivery as the only KPI that counts. If you’re energized by elegant system design, surgical scoping, and ruthless validation, and you already use tools like Claude Code, Cursor, Copilot, or ChatGPT as part of your daily workflow, you’ll thrive here. Bring your architecture chops, your TDD instincts, and your bar-raising taste in code quality. If this sounds like you, step in and lead the future of how software gets built: apply and show us how you orchestrate agents to deliver production-grade outcomes at scale.
Crossover's skill assessment process combines innovative AI power with decades of human research to take the guesswork, human bias, and pointless filters out of recruiting high-performing teams.
It’s super hard to qualify: extreme quality standards ensure every single team member is at the top of their game.
Over 50% of new hires double or triple their previous pay. Why? Because that’s what the best person in the world is worth.
We don’t care where you went to school, what color your hair is, or whether we can pronounce your name. Just prove you’ve got the skills.
Join the world's largest community of AI-first remote workers.