Inside the Hardest Hiring Process I Ever Passed (+Failed)

Contents
  • The Hiring Process
  • The Onboarding Process

What if your best career experience was the one that didn’t work out? Alex Cheng passed the tests, got the offer, and joined Crossover. Then he failed onboarding. But instead of assigning blame, he walked away with clarity - and a new benchmark for what real engineering culture looks like in the AI-first era.

This is Alex’s personal account of his experience with the Crossover hiring and onboarding process. From initial skepticism to full transparency, from offer to unexpected exit - Alex shares what it’s really like to go through one of the tech industry’s most rigorous, no-nonsense pipelines.

The Hiring Process

Entering with skepticism

I joined Crossover as an AI-Augmented Full-Stack Principal Engineer after a hiring process that was more rigorous - and more transparent - than I initially expected.

Before applying, I’d read several online testimonials from candidates and former employees who were clearly frustrated. That made me cautious. Still, I wanted to experience the process firsthand and form my own judgment.

The hiring pipeline was explicitly multi-stage, and the purpose of each step was clearly explained.

The First Fit Filter

It began with a fit and experience interview conducted asynchronously via an AI-guided questionnaire.

This wasn’t résumé screening.

The questions were designed to surface concrete evidence of past decisions, trade-offs, and outcomes. The intent was clear: understand real experience, not polished narratives. The evaluation criteria were visible, and the process felt consistent and fair.

[Image: Alex Cheng CCAT pass]

Passing The Impossible Test

Next came a cognitive aptitude test - timed, but unproctored.

The stated goal was to assess problem-solving ability, learning speed, and critical thinking - attributes research consistently links to adaptability and performance.

I appreciated that the rationale for prioritizing this signal over educational pedigree was stated plainly.

Real-World Testing in Action

Then came the technical challenge, focused on AI-augmented development.

The emphasis wasn’t simply on implementing a feature, but on how one guides an AI within an existing codebase while preserving architectural quality. The framing felt grounded in reality: AI accelerates execution, but judgment, scope control, and accountability remain human responsibilities.

Interview Depth Without Box Checking

After that, I had a live technical interview with Fernando.

We walked through the scenario and required changes together. From the beginning, his questions were precise and substantive. As I responded, he probed deeper - into reasoning, trade-offs, architectural implications.

It quickly became clear this was not a superficial box-checking exercise, but an interview designed to test depth, technical economy, and elegance of thought.

The Video That Didn’t Sell

Ahead of that interview, Fernando asked me to watch a video prepared by Serban. That video left a strong impression on me.

From the opening seconds, Serban was radically transparent about who he was, how he arrived in his role, and how his engineering organization actually functions.

He spoke candidly about small teams, high ownership, the absence of separate QA or Ops, explicit performance metrics, AI as a force multiplier, and a culture of direct feedback and accountability.

He also addressed common concerns directly - job stability, compensation, time tracking, expectations, and growth paths - without softening the edges.

What stood out was not just the content, but the intent. The video didn’t try to sell the role - it tried to reveal it. It functioned as a filter: this is how we work - if it resonates, continue; if it doesn’t, better to know now.

Watching it shifted the tone of the process from discovery to alignment.

Verifying Consistency in the Process

Later in the pipeline, I was asked to take the same cognitive aptitude test again, this time proctored.

The expectation wasn’t perfection, but consistency - roughly the same performance range as the unproctored attempt.

Once again, the reasoning was explicit: cognitive testing, when used carefully, has strong empirical backing as a predictor of learning potential and adaptability.

An offer followed.

What I Was Opting Into

Across the entire hiring process, what stood out was not just the difficulty, but the coherence. The bar was high, but visible. Expectations were stated upfront.

And there was a clear effort to ensure that candidates understood exactly what they were opting into before committing.

Transparency is a word I use often in this account - but here, it feels earned.

The Onboarding Process

How Can We Help You Succeed

From my first onboarding interactions, I received a strong and consistent signal that the people involved genuinely wanted me to succeed.

That distinction matters.

The professionalism I encountered went beyond politeness or formulaic encouragement. The support spanned both technical and operational contexts.

Whether it was tooling, process, or navigating how things worked day to day, Serban was consistently responsive, patient, and quick to unblock issues. His posture - like others on the team - was not merely ‘available,’ but actively helpful.

In my first meeting with Sergio and Serban, I was explicitly encouraged to ask questions whenever something was unclear.

The message was simple and repeated: their role was to help unblock me.

Ambiguity was not treated as a test of toughness; clarity was treated as a shared responsibility. Across individuals and contexts, the message was consistent: tell us how we can help you succeed.

[Image: Alex Cheng onboarding fail]

Blunt But Not Personal

The feedback I received was direct and unsugarcoated, including clear signals that I was not progressing as expected. Yet it never felt personal or adversarial. Being blunt without being dehumanizing is rare, and it was maintained consistently.

What stood out most was the team’s uncompromising commitment to quality, focus, and architectural discipline.

Correctness alone was never enough. Changes had to justify themselves in scope, impact, consistency, and cost.

The GraphQL Lesson

A concrete example came from a GraphQL change I proposed during onboarding.

I implemented a field resolver that allowed a Product to expose its Category as a nested object. The resolver narrowed event.source using a type guard, extracted categoryId, and fetched the corresponding category via the service layer - allowing clients to query products and categories in a single request.
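For concreteness, here’s a minimal sketch of the shape that resolver took. It’s a reconstruction under assumptions: the AppSync-style AppSyncResolverEvent, the categoryService accessor, and the Product/Category types are illustrative stand-ins, not Crossover’s actual code.

```typescript
// Illustrative sketch only - names and the AppSync framing are assumptions.
import type { AppSyncResolverEvent } from 'aws-lambda';

interface Product {
  id: string;
  categoryId?: string;
}

interface Category {
  id: string;
  name: string;
}

// Hypothetical service-layer accessor standing in for the real one.
const categoryService = {
  async getById(id: string): Promise<Category | null> {
    return { id, name: `category-${id}` }; // placeholder lookup
  },
};

// Type guard narrowing the loosely typed event.source to a Product.
function isProduct(source: unknown): source is Product {
  return (
    typeof source === 'object' &&
    source !== null &&
    typeof (source as Product).id === 'string'
  );
}

// Field resolver for Product.category.
export async function handler(
  event: AppSyncResolverEvent<Record<string, never>>
): Promise<Category | null> {
  if (!isProduct(event.source) || !event.source.categoryId) {
    // Nullability tension: every client must now handle a missing category.
    return null;
  }
  // One service call per parent product - the latent N+1 risk.
  return categoryService.getById(event.source.categoryId);
}
```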

Technically, it worked. My intent was to improve - excuse the naïveté - client ergonomics and reduce round-trips.

Danish challenged it directly.

His feedback was explicit: “Too complicated for not much ROI (that’s the whole reason we have a separate endpoint for category).”

The issue wasn’t correctness. It was architectural discipline. The change introduced additional resolver logic, nullability tension, and potential N+1 risk for marginal benefit. The system already exposed categories through a dedicated endpoint. The added generality did not justify its cost within scope.
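To make the N+1 risk concrete: once Product.category exists, a query listing N products fires the resolver N times, one category fetch per product. The standard mitigation is request-scoped batching - sketched below with DataLoader, which is my assumption about how one would patch it, not something the team prescribed:

```typescript
// Hypothetical batching fix - DataLoader usage is illustrative only.
import DataLoader from 'dataloader';

interface Category {
  id: string;
  name: string;
}

// Hypothetical batch fetch: one backend call for many ids,
// instead of one call per product in the result set.
async function batchGetCategories(
  ids: readonly string[]
): Promise<(Category | null)[]> {
  const rows: Category[] = ids.map((id) => ({ id, name: `category-${id}` }));
  const byId = new Map(rows.map((c) => [c.id, c] as const));
  // DataLoader requires results in the same order as the requested ids.
  return ids.map((id) => byId.get(id) ?? null);
}

const categoryLoader = new DataLoader(batchGetCategories);

// Inside the field resolver, per-product loads coalesce into one batch:
//   return categoryLoader.load(product.categoryId);
```

Batching works, but it is precisely the kind of extra machinery - caching, key ordering, error mapping - that the existing dedicated category endpoint never needed.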

That exchange clarified expectations more effectively than any written guideline could: even technically valid improvements are rejected if they don’t earn their keep.

When Good Faith Isn’t Enough

In the end, I did not meet expectations for the role.

I failed onboarding.

This happened in good faith, within a context of open communication and proactive support. The outcome was disappointing - particularly because I would have genuinely enjoyed continuing to learn from and work with this team.

The rapport was there. The personality fit was there. But I was not yet operating at the level required. In this environment, quality and standards supersede everything else.

It may sound counterintuitive, but failing for that reason increased my respect for the team and their commitment to their principles.

Standards Over Sentiment

In my final conversation with Serban, we had a candid discussion that felt more like coffee with a mentor than a formal exit meeting. There was no sugarcoating, but there was also no harshness - just honesty, empathy, and mutual respect.

At one point, he mentioned that perhaps we could try again in the future, if and when I fit the role. I took that not as something to anchor my plans to - the future is uncertain for everyone, especially if some future AGI tires of somewhat pleasantly evolved and relatively hairless simians - but as a generous gesture.

Leaving With Respect

The idea that the door might remain open reinforced something I had felt from the beginning: they were genuinely cheering for me - and for every incoming candidate - to succeed.

In contrast to some of the negative accounts I had read beforehand, my experience - across both hiring and onboarding - was defined by transparency, intellectual honesty, and humanity.

Crossover’s model is demanding by design, and it will not suit everyone.

But for engineers who value clarity, high standards, and direct feedback delivered without ego, it is internally consistent, principled, and deeply human.

Five stars. I would recommend it.

- Alexander Liu Cheng
