Newsletter

The Talent Brief

Bi-weekly recruiting insight from Adam Kovacs. No filler. Each issue covers one specific hiring problem, what causes it, and what to do about it.


Issue 4

The AI sourcing tool is not the problem

Bi-weekly | Week 11

Every few months, a new sourcing tool promises to find better candidates faster. The teams that buy it see a brief lift in pipeline volume, then plateau. The teams that do not buy it are frustrated watching the ones that did move faster.

The tool is not the variable.

Two recruiters using the same AI sourcing tool with two different briefs will produce fundamentally different pipelines. The recruiter with a sharp brief (role defined, skills stack-ranked, comp calibrated, target companies mapped) will produce a pipeline worth reviewing. The recruiter with a vague brief will produce the wrong pipeline, just as fast.

AI sourcing tools are multipliers. They make whatever you put in come back faster and in higher volume. A clear brief becomes a strong pipeline quickly. A vague brief becomes a flood of irrelevant profiles quickly.

The question before selecting or evaluating a sourcing tool is not "what can this tool find?" It is "how good is the brief we are giving it?" If the brief is weak, a better tool will not help. It will make the problem bigger.

The teams that get the most out of AI sourcing tools share one practice: they treat the brief as a product. They write it, review it, and revise it before sourcing starts. They track which brief elements predict pipeline quality and adjust over time.

Before your next tool evaluation, run this test. Give your current tool your best brief and your worst brief. Look at the difference in pipeline quality. That gap is your actual problem. No tool change will close it faster than improving how you brief.

Adam

Issue 3

Why your best technical candidates drop off at the technical screen

Bi-weekly | Week 9

Late-stage candidate dropout is a sourcing problem, not an assessment problem.

When most candidates who clear the recruiter screen still fail the technical bar, the instinct is to fix the assessment. Make it harder. Make it more relevant. Add a take-home. The real question is why those candidates were in the pipeline in the first place.

The technical screen is exposing a misalignment that was present at round zero, before sourcing started. The criteria the recruiter used to qualify candidates do not match the criteria the technical team uses to evaluate them. No one noticed because the mismatch only shows up at round three.

The fix happens before sourcing starts. Before a sourcer searches a single profile, the recruiter and the hiring manager should agree on the two or three non-negotiable technical capabilities that will determine the outcome of the technical screen. Not a full competency model. Not a 12-point rubric. Two or three things. If a candidate cannot demonstrate those things, they should not be in the pipeline.

This calibration takes 30 minutes. Most teams skip it because they are already behind on the search. They pay for that decision in weeks of rework.

The side effect of running this conversation early is that it also improves the technical screen itself. Once the hiring manager has said out loud what they are actually assessing for, the screen tends to get sharper and shorter.

If your pass rate at the technical screen is under 50 percent, start with the intake calibration. The screen is probably fine. The brief is the problem.

Adam

Issue 2

Your job description is not a document. It is a sourcing filter.

Bi-weekly | Week 7

Most job descriptions are written to get approved, not to attract candidates.

They go through three rounds of internal review. The HR team adds compliance language. The hiring manager adds aspirational requirements. The recruiter publishes what survives. The resulting document describes an ideal candidate who does not exist and a job that sounds like every other job in the category.

This matters because the job description is the first sourcing filter. The language you use determines which candidates respond, which sourcing channels work, and which profiles your ATS surfaces. A JD written for internal approval produces an applicant pool misaligned with what the team actually needs.

Three edits that immediately improve applicant quality:

  • Replace the requirements list with a problem statement. Instead of "5+ years of experience in X," write "In your first 90 days, you will need to do Y. Here is what that looks like." Candidates self-select more accurately. You get fewer applications and more of the right ones.
  • Cut the aspirational requirements. Every item in the "nice-to-have" section that is really a "must-have" is a filter you forgot to apply. Every item that is genuinely optional is noise that inflates the search. Decide which is which before you post.
  • Write the comp range before you write anything else. If you cannot agree internally on the range, the search should not start yet. Publishing a vague range or no range at all wastes time: yours and the candidate's.

None of this requires a new tool or a new process. It requires a harder conversation at the beginning of the search instead of a painful one at the end.

If you want a JD template that is built around this logic rather than around compliance language, reply and I will send it over.

Adam

Issue 1

The intake meeting you skipped is why you are still hiring for that role

Bi-weekly | Week 5

Most companies measure recruiting performance at the wrong end of the process.

They track time-to-offer, not time-to-brief. They track application volume, not brief quality. By the time a search is running slowly, the problem is three weeks upstream.

Time-to-fill for specialized technical roles often runs well past the 44-day median SHRM reports across all roles. Most of that delay is not lost at the offer stage. It is lost at the intake stage: the 45-minute meeting that most teams skip, abbreviate, or turn into a document hand-off.

Here is what happens when intake is skipped. The sourcer searches for a candidate who matches a job description written by a hiring manager who last hired two years ago, for a role the team cannot fully define yet. The pipeline fills with plausible profiles that do not actually fit. Round three reveals the misalignment. Everyone starts over.

A structured intake meeting cuts this cycle before it begins. The questions are not complicated:

  • What does "senior" actually mean on this team, right now?
  • Which skills are required versus preferred?
  • What does success look like in the first 90 days?
  • Who has final say on the hire, and what do they weight most?
  • What has made the last three hires succeed or fail?
  • What is the realistic comp range, and has it been calibrated to the current market?
  • What is the actual urgency, and what happens if this role is not filled in 60 days?

Seven questions. Forty-five minutes. Companies that run this conversation typically fill specialized roles in weeks, not months.

If you want the full intake framework we use with clients (including the facilitation guide and a template for the brief that comes out of the meeting), reply to this email and I will send it directly.

Adam