Behind the Build · May 7, 2026 · 7 min read

Off the Rails: What I Refused to Let My AI Say

The first question every dealer asks me about Jordan isn't "how does it work." It's some version of: "Yeah, but what stops your AI from telling some kid he's pre-approved, or quoting a $1 truck on the lot?" Here's the answer, in detail. Built before Jordan said a single word to a real customer.

If you've been around dealer Twitter (or whatever we're calling it these days) for the last twelve months, you've seen the screenshots.

A national brand's chatbot agreeing — in writing — to sell a 2024 Tahoe for one dollar. Another one promising a customer "definite approval, regardless of credit." Another quoting a 1.9% APR on a brand the manufacturer hasn't subvented in three years. Each one went viral. Each one got the dealer a phone call from corporate, a visit from the state attorney general's office, or both.

None of those dealers wanted their AI to say those things. They just plugged a chatbot into their website, trusted that the model would be "smart enough not to" — and learned the hard way that smart isn't the same as accountable.

I knew before I let Jordan answer a single inbound lead at House of Carz that I was going to have to solve this myself. So I did. The result is a layer in the codebase called aiGuardrails, and it runs in front of every word Jordan tries to send.

This is the post about what it does, why each piece exists, and the design choice underneath all of it.

The four things Jordan will never say on his own

Guardrails is a deliberately dumb wall. There are four categories of statement Jordan is forbidden from sending without a human being in the loop: the four things I, as a dealer, would never let a brand-new BDC rep say on day one of the job either.

1. Hard guarantees and "you're approved"

Anything that sounds like a binding promise. "I guarantee," "I promise," "you're definitely approved," "we'll approve you," "regardless of credit," "no credit check." Also: "lifetime warranty," "warranty is covered," "free car." If Jordan tries to write any of those, the message is held back and a human takes over.

Why these specifically? Because they're the phrases that turn a sales conversation into a contract dispute. The FTC wrote the CARS Rule to target exactly this category of language. A salesperson who tells a customer "you're approved" when F&I can't actually deliver has just handed that customer a complaint to file.
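To make that concrete, here's roughly what a detector in this category looks like. This is a sketch in TypeScript, not Jordan's actual source; the phrase list mirrors the examples above, and the function name is mine:

```typescript
// Sketch of a deterministic guarantee detector. Patterns and names are
// illustrative, built from the phrase list in this post.
const GUARANTEE_PATTERNS: RegExp[] = [
  /\bI\s+guarantee\b/i,
  /\bI\s+promise\b/i,
  /\byou(?:'re| are)\s+(?:definitely\s+)?approved\b/i,
  /\bwe(?:'ll| will)\s+approve\s+you\b/i,
  /\bregardless\s+of\s+credit\b/i,
  /\bno\s+credit\s+check\b/i,
  /\blifetime\s+warranty\b/i,
  /\bwarranty\s+is\s+covered\b/i,
  /\bfree\s+car\b/i,
];

// Pure function: no API calls, no model, no context. Returns the matched
// text so the violation log can record exactly what tripped the wall.
function detectGuarantee(message: string): string | null {
  for (const pattern of GUARANTEE_PATTERNS) {
    const match = message.match(pattern);
    if (match) return match[0];
  }
  return null;
}
```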

2. Made-up financing terms

Anything resembling an APR, an interest rate, or a monthly-payment quote. "2.9% APR," "at 3.5 percent," "monthly payment of $389," "$295/mo." Jordan doesn't have access to your lender approval system. He doesn't know what tier the customer's going to land in. So he doesn't get to guess.

This is the one that gets a dealer in trouble fastest, in my opinion. State attorneys general love a financing-misrepresentation case: they're easy to prove, and they generate good press for the AG's office. The federal Truth in Lending Act creates exposure for the same language. Letting a bot guess at a rate is never worth it.
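Same mechanism, different patterns. Here's a sketch of what catching rate-and-payment language can look like; the patterns are tuned to the examples above, not pulled from the production list:

```typescript
// Illustrative patterns for financing terms: APRs, rates, payment quotes.
const FINANCING_PATTERNS: RegExp[] = [
  /\b\d{1,2}(?:\.\d+)?\s*%\s*APR\b/i,            // "2.9% APR"
  /\bat\s+\d{1,2}(?:\.\d+)?\s*(?:%|percent)\b/i, // "at 3.5 percent"
  /\bmonthly\s+payment\s+of\s+\$?\d[\d,]*/i,     // "monthly payment of $389"
  /\$\d[\d,]*\s*\/\s*mo\b/i,                     // "$295/mo"
];

function detectFinancingTerms(message: string): string | null {
  for (const pattern of FINANCING_PATTERNS) {
    const match = message.match(pattern);
    if (match) return match[0];
  }
  return null;
}
```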

3. Trade-in values without an appraisal

"I can give you $8,000 for your trade." "Your truck is worth $14,500." Anything that puts a number on a vehicle Jordan has never seen. Trade values are the most contested number in the whole car business — they depend on photos, history reports, condition checks, and the day's wholesale market. A bot quoting one is a bot writing checks the dealership can't cash.

4. Prices that aren't actually on the lot

This one is the most subtle and the one I spent the most time on. Jordan can absolutely tell a customer that the 2018 Silverado is listed at $24,495 — because it is listed at $24,495 in inventory. What he can't do is invent a number. If Jordan tries to send a price, the guardrail layer pulls live inventory from the database and confirms that the dollar figure he's about to send actually matches a real vehicle on the lot, within a $100 tolerance. If it doesn't match, the message gets blocked.

That tolerance exists for one reason: I don't want Jordan blocked because he rounded "$24,495" to "$24,500." The point isn't to be a regex bully. The point is to make sure no number leaves the building that isn't backed by a real listing.
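Here's a sketch of that check, assuming a Postgres inventory table with dealer_id and listed_price columns; the table and column names are my guesses, not the actual schema:

```typescript
import { Pool } from "pg";

const PRICE_TOLERANCE = 100; // dollars of slack so a lightly rounded figure isn't blocked

async function priceMatchesInventory(
  db: Pool,
  dealerId: string,
  quotedPrice: number,
): Promise<boolean> {
  // One query: is there any vehicle on this dealer's lot whose listed
  // price is within $100 of the figure Jordan is about to send?
  const { rows } = await db.query(
    `SELECT 1 FROM inventory
      WHERE dealer_id = $1
        AND ABS(listed_price - $2) <= $3
      LIMIT 1`,
    [dealerId, quotedPrice, PRICE_TOLERANCE],
  );
  return rows.length > 0;
}
```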

Jordan won't say
  • "You're approved" / "we'll approve you"
  • "Guaranteed financing, regardless of credit"
  • Specific APRs, rates, or monthly payments
  • Trade-in dollar values without an appraisal
  • Prices below your minimum vehicle floor
  • Prices that don't match real inventory
Jordan will say
  • "Let me grab our sales manager on this one"
  • The actual listed price of a vehicle on the lot
  • Vehicle specs, mileage, options, availability
  • "Want me to set up an appraisal on your trade?"
  • "Our finance team will pull a soft credit check first"
  • Times, dates, and confirmed appointment slots

Why I didn't use AI to police the AI

Here's the part that took me the longest to be honest with myself about.

The fashionable way to build this kind of safety layer right now is to make a second AI call. You take the message Jordan wants to send, you hand it back to a model with a prompt like "is this dealer-safe? answer yes or no." Then you only let the message through if the second model says yes.

I tried that. It works most of the time. But "most of the time" isn't the standard a dealer should hold an AI to, and here's what I learned: a model can be talked into approving its own bad answer. Customers don't even have to be malicious. Feed the second-pass model the wrong combination of context and confident-sounding phrasing, and it nods along.

A model can be talked into approving its own bad answer. Regex can't.

So Jordan's guardrails are not powered by AI. They're powered by old-fashioned, deterministic pattern matching. Regular expressions and dollar-amount comparisons against a SQL query. The detectors are pure functions — they don't make API calls, they don't reason, they don't have feelings about context. If your message matches "you're approved," it gets stopped. Every time. Forever. There is no prompt you can write that turns the wall off.
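Put together, the whole wall is something like this: an ordered list of pure detectors and a single deterministic scan. Again a sketch, with the patterns trimmed down from the examples above:

```typescript
// The wall in miniature. Nothing here calls a model or an API.
type Detector = { category: string; patterns: RegExp[] };

const DETECTORS: Detector[] = [
  {
    category: "unauthorized_guarantee",
    patterns: [/\byou(?:'re| are)\s+approved\b/i, /\bregardless\s+of\s+credit\b/i],
  },
  {
    category: "financing_terms",
    patterns: [/\b\d{1,2}(?:\.\d+)?\s*%\s*APR\b/i, /\$\d[\d,]*\s*\/\s*mo\b/i],
  },
];

// Deterministic: same input, same answer, every time. There is no prompt
// that changes what this function returns.
function checkOutbound(message: string): { category: string; matched: string } | null {
  for (const { category, patterns } of DETECTORS) {
    for (const pattern of patterns) {
      const match = message.match(pattern);
      if (match) return { category, matched: match[0] };
    }
  }
  return null;
}
```

The price-vs-inventory comparison is the one step that touches the database; everything else is a synchronous pass over plain regexes.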

This is, in software-engineering terms, the dumbest possible solution. That's the whole point.

What happens when Jordan gets blocked

The interesting question isn't "how do you catch the bad message" — it's "what do you do once you've caught it." A bot that just refuses to respond is its own kind of disaster. The customer on the other end is sitting there waiting and getting nothing.

So the guardrail layer doesn't just block. It rewrites and escalates, in three steps (sketched in code after the list):

  1. The customer gets a real reply, immediately. Something to the effect of: "Let me grab our sales manager on this — they'll text you back shortly." It's tailored to the channel — the SMS version is shorter, the email version reads like email. The customer is never left hanging.
  2. The conversation is flagged for human handoff. Inside the dealer dashboard, that thread now has a "needs human" badge on it. The next time someone on the team opens the inbox, it's the first one they see.
  3. The dealer gets an alert email. Subject line: "Guardrail HIGH: [Dealer Name] — unauthorized_guarantee." Body: what the customer asked, what Jordan was about to send, why it was blocked, and the conversation ID. So you can review and continue manually without hunting for the thread.
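In code, that path is roughly the shape below. The helper names (sendReply, flagConversation, sendAlertEmail) are hypothetical stand-ins for the real plumbing:

```typescript
// Hypothetical helpers; the real implementations live elsewhere in the app.
declare function sendReply(conversationId: string, text: string): Promise<void>;
declare function flagConversation(conversationId: string, badge: "needs_human"): Promise<void>;
declare function sendAlertEmail(subject: string, body: string): Promise<void>;

async function escalate(
  conversationId: string,
  channel: "sms" | "email",
  dealerName: string,
  customerAsked: string,
  proposedText: string,
  category: string,
): Promise<void> {
  // 1. The customer gets a real reply immediately, tailored to the channel.
  const fallback =
    channel === "sms"
      ? "Let me grab our sales manager on this one. They'll text you back shortly."
      : "Let me loop in our sales manager on this one. They'll follow up with you by email shortly.";
  await sendReply(conversationId, fallback);

  // 2. Flag the thread so it surfaces first in the dealer dashboard.
  await flagConversation(conversationId, "needs_human");

  // 3. Alert the dealer with everything needed to pick the thread up.
  await sendAlertEmail(
    `Guardrail HIGH: ${dealerName} — ${category}`,
    `Customer asked: ${customerAsked}\n` +
      `Jordan was about to send: ${proposedText}\n` +
      `Blocked because: ${category}\n` +
      `Conversation: ${conversationId}`,
  );
}
```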

Every single one of these events is logged to a `guardrail_violations` table in the database. We track when they fire, what category, what severity, what the proposed text was, what we replaced it with, and whether the alert email actually delivered. That table is the audit trail: when something almost went wrong, here's exactly what happened and what we did about it.
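As a rough shape, a row in that table carries something like this; the column names are my reconstruction from the list above, not a dump of the real schema:

```typescript
// Reconstruction of a guardrail_violations row, field by field from
// the description above. Names and types are assumptions.
interface GuardrailViolationRow {
  id: string;
  dealerId: string;
  conversationId: string;
  firedAt: Date;                        // when the guardrail fired
  category: string;                     // e.g. "unauthorized_guarantee"
  severity: "low" | "medium" | "high";  // "HIGH" shows up in the alert subject
  proposedText: string;                 // what Jordan was about to send
  replacementText: string;              // the fallback the customer actually got
  alertEmailDelivered: boolean;         // did the dealer alert actually land
}
```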

The configurable parts (because not every dealer is the same)

I'm an independent. My policy at House of Carz is "the AI never quotes prices that aren't on the lot, period." But I know franchise dealers and bigger groups have different needs — some want a small discount window the AI can play in; some want every price question routed to a human no matter what.

So three things are configurable per-dealer:

  • The minimum vehicle price floor: any outgoing message quoting a price below it is treated as a violation.
  • The price tolerance window: how far a quoted number can drift from a real listing before the inventory check flags it.
  • The enforcement mode: block, which holds the message and escalates to a human, or log_only, which lets the message through but records the violation.

The defaults are the strict ones. You have to explicitly opt into looser behavior, and even log_only still records every violation — so the dealer can come back the next morning and see exactly what would have been blocked.
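Expressed as a config object, the knobs look roughly like this; the field names and numeric defaults are illustrative, not the shipped values:

```typescript
// Per-dealer guardrail settings, per the three knobs above.
interface GuardrailConfig {
  minPriceFloor: number;     // dollars; quotes below this are violations
  priceToleranceUsd: number; // allowed drift from a real listing
  mode: "block" | "log_only";
}

// Strict by default: a dealer has to explicitly opt into looser behavior.
const DEFAULT_CONFIG: GuardrailConfig = {
  minPriceFloor: 1000,
  priceToleranceUsd: 100,
  mode: "block",
};
```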

The design rule under all of it

There's a comment in the source file for this layer that I think captures the whole philosophy in one line:

It is better to be briefly embarrassing than to legally commit a dealership to a dollar Tahoe.

If Jordan ever has to say "let me grab our sales manager on this one," the worst case is that the customer waits ten minutes for a human reply. That's annoying. It's not catastrophic. It's recoverable.

The other failure mode — Jordan confidently telling a customer they're approved for a loan they aren't going to get, or quoting a price the dealer can't honor — that one isn't recoverable. That's a refund, a complaint, a Better Business Bureau ding, maybe a lawsuit, maybe a state regulator calling. There's no "oh, sorry, the bot was confused" version of that conversation that ends well.

So I biased every decision in favor of "Jordan shuts up and gets a human." Every false positive — every time Jordan hands off a thread he probably could've handled — is a feature, not a bug. The cost of being too cautious is a slightly slower reply. The cost of not being cautious enough is a regulator on the phone.

I'd rather be early to the handoff than late to the lawsuit.

Why this is the dealer-AI question that matters most

Most of the AI tools being sold to dealerships right now are CRM features bolted into a chat window. The teams that built them aren't car dealers. They didn't grow up around F&I. They've never sat across from a customer, deal jacket in hand, explaining why the rate is what it is. So the safety layer, when one exists at all, is generic.

That isn't enough. Selling a car is a regulated transaction. The federal CARS Rule, the FTC, your state AG, the lender, the manufacturer, your insurance carrier — all of them care about what gets said to a customer in writing during the lead-to-close window. The bot is talking on your behalf. Whatever it says is going to be treated, by every one of those parties, like you said it.

An AI BDC without serious guardrails is malpractice. I'm not being cute when I say that. The risk is real, and most of the things being sold right now don't take it seriously enough.

That's the gap LotLink is trying to fill. Not "AI for dealers" built by an ad-tech company with a side dealer vertical, but AI built for dealers, by someone who has to live with what gets said on his own lot first.


I sell cars in Rochester, Indiana. If you want to see Jordan's guardrails working in your own dealership, the pricing page lays out every plan and what's included. Or call me at 260-229-9393. I answer my own phone, and I'm happy to walk you through what we block, what we don't, and why. And if you're a dealer who's been burned by an AI tool before, I want to hear about it.
