Assignment Desk

Algorithmic Hiring Transparency: How Assignment Desk Complies with NYC Local Law 144 and Beyond

TL;DR

How Assignment Desk's deterministic crew ranking system meets and exceeds algorithmic transparency requirements under NYC LL 144, the Colorado AI Act, and the EU AI Act.

Governments around the world are waking up to the fact that algorithms now make consequential decisions about people's livelihoods. In hiring and staffing, automated tools determine who gets considered, who gets ranked, and who gets the job. A growing body of legislation demands that these tools be transparent, auditable, and free from discriminatory bias.

Assignment Desk's crew ranking system was built with these requirements in mind — not as an afterthought, but as a foundational design principle. Here is how we comply with current laws and why we believe our approach exceeds every requirement on the books.

NYC Local Law 144 — Automated Employment Decision Tools

New York City's Local Law 144, in effect since July 2023, was the first law in the United States to mandate bias audits of automated hiring tools. It applies to any employer or staffing agency that uses an "automated employment decision tool" (AEDT) to screen or rank candidates in New York City.

The law requires:

  • Annual bias audits — An independent auditor must test the tool for disparate impact across race, ethnicity, and gender categories.
  • Published audit results — The bias audit summary must be posted publicly on the employer's website.
  • Candidate notice — Job applicants must be notified at least 10 business days before the tool is used, told what job qualifications the tool assesses, and informed of their right to request an alternative selection process or accommodation.
  • Data transparency — Candidates must be told what data the tool collects and retains, and how it is used.

Colorado AI Act (SB 24-205)

Colorado's Artificial Intelligence Act, signed in 2024 with key provisions taking effect in 2026, goes further than NYC LL 144 by covering a broader range of "high-risk AI systems" — including those used in employment decisions. The Act requires:

  • Impact assessments — Developers and deployers of high-risk AI must conduct and document impact assessments evaluating the system's risks, including algorithmic discrimination.
  • Consumer disclosure — Individuals must be notified when an AI system is being used to make or substantially contribute to a consequential decision about them.
  • Right to explanation — Affected individuals can request an explanation of the AI system's decision.
  • Right to appeal — There must be a process for individuals to challenge AI-driven decisions.

Illinois Artificial Intelligence Video Interview Act and Successors

Illinois was an early mover in regulating AI in employment. The state's Artificial Intelligence Video Interview Act, effective in 2020, requires employers who use AI to analyze video interviews to obtain the applicant's consent and explain how the AI works before the interview. Subsequent Illinois legislation extends these principles beyond video interviews to AI-assisted hiring and ranking decisions more broadly, requiring notice to candidates and prohibiting discriminatory use of AI.

EU AI Act — High-Risk Employment Systems

The European Union's AI Act, which entered into force in August 2024 with obligations phasing in through 2027, classifies AI systems used in employment and worker management as "high-risk." This triggers the most stringent requirements in the Act:

  • Conformity assessments before deployment
  • Human oversight requirements ensuring a qualified person can override AI decisions
  • Technical documentation detailing the system's purpose, accuracy, and limitations
  • Transparency obligations requiring deployers to inform affected individuals
  • Fundamental rights impact assessments for public-sector deployers

While the EU AI Act primarily applies to systems operating within the EU, it sets the global benchmark that other jurisdictions are moving toward.

How Assignment Desk Meets and Exceeds These Requirements

Assignment Desk's master rating system was designed to be compliant with all of the above — and to go further. Here is why:

Deterministic, Not Machine Learning

Our crew ranking system is a deterministic formula, not a machine learning model. It does not train on data, it does not discover patterns autonomously, and it does not produce outputs that even its developers cannot explain. Every factor in the score is defined explicitly: tier base score, Bayesian-smoothed ratings, badge points, profile completeness, booking history, recency, and equipment. The formula is fixed, auditable, and reproducible.

This is a deliberate architectural choice. ML-based hiring tools are where bias risk is highest — the model can learn to correlate protected characteristics with scoring outcomes without anyone intending it to. Our system cannot do this because it does not learn at all. It applies the same formula to every crew member every time.
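The factors listed above can be sketched as a single fixed function. The sketch below is purely illustrative — the names, weights, tier values, and Bayesian prior are our assumptions for this example, not Assignment Desk's actual published formula — but it shows the key property: the same inputs always produce the same score, and every term is inspectable.

```python
from dataclasses import dataclass

# Hypothetical weights and priors -- illustrative only, not the real formula.
TIER_BASE = {"verified": 40.0, "standard": 20.0, "new": 10.0}
PRIOR_MEAN = 4.0     # assumed platform-wide average star rating
PRIOR_WEIGHT = 10    # assumed smoothing strength, in pseudo-reviews

@dataclass
class CrewProfile:
    tier: str
    rating_sum: float            # sum of all star ratings received
    rating_count: int
    badge_points: float
    profile_completeness: float  # 0.0 to 1.0
    completed_bookings: int
    days_since_last_booking: int
    equipment_items: int

def smoothed_rating(rating_sum: float, rating_count: int) -> float:
    """Bayesian smoothing: pull sparse rating histories toward the prior mean,
    so a single 5-star review cannot outrank a long track record."""
    return (rating_sum + PRIOR_MEAN * PRIOR_WEIGHT) / (rating_count + PRIOR_WEIGHT)

def master_score(p: CrewProfile) -> float:
    """Deterministic sum of explicit, professional-only factors.
    No training, no learned weights, no demographic inputs."""
    score = TIER_BASE.get(p.tier, 0.0)
    score += smoothed_rating(p.rating_sum, p.rating_count) * 10
    score += p.badge_points
    score += p.profile_completeness * 15
    score += min(p.completed_bookings, 50) * 0.5            # capped history credit
    score += max(0.0, 10 - p.days_since_last_booking / 30)  # recency decay
    score += min(p.equipment_items, 10) * 1.0
    return round(score, 2)
```

Because every term is an explicit line of code rather than a learned weight, an auditor can verify directly that no protected characteristic enters the calculation, and a crew member can trace exactly which factor to improve.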

Fully Explainable

The complete formula is published on our website. Any crew member can see exactly how their score is calculated, what each factor contributes, and how they can improve. This exceeds the "explanation on request" standard because we do not wait for requests — the explanation is proactive and permanent.

No Demographic Data in Scoring

The master rating formula does not use age, gender, race, ethnicity, religion, disability status, or any other protected characteristic as an input. It cannot produce disparate impact through proxy variables because the only inputs are professional factors: ratings, profile quality, booking history, equipment, and platform tier.

Crew Can See Their Score

Every crew member has access to their master rating score, its component factors, and specific guidance on how to improve it. This is not a "you can request your score" situation — the score is displayed in the crew portal dashboard at all times.

Human Override

Production coordinators always have the ability to override algorithmic rankings. The crew directory ranking is a suggestion — coordinators can and do select crew members based on their own professional judgment, relationships, and preferences. The algorithm informs human decisions; it does not replace them.

We Do Not Just Comply — We Exceed Every Requirement

The crew staffing industry is behind the curve on algorithmic transparency. Many platforms use opaque ranking systems that they will not explain, cannot audit, and refuse to discuss. Some use ML models that even their own engineers cannot fully explain.

Assignment Desk took the opposite approach. We built a system that is transparent by design, deterministic by architecture, and explainable by default. Not because the law forced us to — although we are fully compliant — but because we believe that any system that determines whether a working professional gets booked should be one that working professional can understand and influence.

Read the full details at assignmentdesk.com/transparency.

Need a Production Crew?

Assignment Desk provides professional camera crews in 24+ cities nationwide.

Book A Crew