There would be competition from API wrappers; if you're willing to pay, there will always be plenty of options to chat without ads. I hate to think what they and others might come up with to try to thwart this.
I think ads will take the form of insidious but convincing product placement invisibly woven into model outputs. This will both prevent any blocking of ad content, and also be much more effective: after all, we allude to companies and products all the time in regular human conversation, and the best form of marketing is organic word-of-mouth.
I just saw a sibling post about Kagi; maybe this is how the industry will end up, with a main provider like OpenAI and niche wrappers on top (I know Kagi is not just a Google wrapper, but at least they used to return Google search results that they paid for).
I thought you were going to say “that comment recommending Kagi is exactly what those ads would look like: native responses making product recommendations as if they’re natural responses in the conversation”
That is a weird definition of advertising. It's not an ad if I mention (or even recommend) a product in a post, without going off-topic and without getting any financial benefit.
The New Oxford American Dictionary defines "advertisement" as "a notice or announcement in a public medium promoting a product, service, or event." By that definition, anything that mentions a product in a neutral light (thereby building brand awareness) or positive light (explicitly promotional) is an ad. The fact that it may not be paid for is irrelevant.
A chatbot tuned to casually drop product references like in this thread would build a huge amount of brand awareness and be worth an incredible amount. A chatbot tuned to be insidiously promotional in a surgically targeted way would be worth even more.
I took a quick look at your comment history. If OpenAI/Anthropic/etc. were paid by JuliaHub/Dan Simmons' publisher/Humble Bundle to make these comments in their chatbots, we would unambiguously call them ads:
Precisely; today Julia already solves many of those problems.
It also removes many of Matlab's footguns, like `[1,2,3] + [4;5;6]`, or `diag(rand(m,n))` doing two different things depending on whether m or n is 1.
(for the sake of argument, pretend Julia is commercial software like Matlab.)
> Name a game distribution platform that respects its customers
Humble Bundle.
You seem like a pretty smart, levelheaded person, and I would be much more likely to check out Julia, read Hyperion, or download a Humble Bundle based on your comments than I would be from out-of-context advertisements. The very best advertising is organic word-of-mouth, and chatbots will do their damnedest to emulate it.
I don’t know how subtle or stealthy you can be in text. In movies there’s a lot of stuff going on that I may not particularly notice, but I’m going to notice “Susie, while at home drinking her delicious ice cold coca-cola….”
> I’m going to notice “Susie, while at home drinking her delicious ice cold coca-cola….”
It will be much more subtle. Asking an LLM to help you sift through reviews before you spend $250 on some appliance or what good options are for hotels on your next trip…
Basically the same queries people throw into Google, except instead of manually opening a bunch of tabs and doing their own comparison, the LLM does it for them. But now the LLM isn’t doing a neutral evaluation; it’s going to always suggest one particular hotel despite it not being the best for your query.
Not all answers are conducive to such subtle manipulation, though. If the user asks for an algorithm to solve the knapsack problem, it's kind of hard to stealthily go "now let's see how many Coca Colas will fit in the knapsack". If the user asks for a cyberpunk story, "the decker prepared his Microsoft Cyberdeck" would sound off, too.
Biasing actual buying advice would be feasible, but it would have to be handled very carefully to not be too obvious.
Right, I just don’t see how it can be subtle. Maybe it will have the opposite effect, where I assume things are ads that aren’t: any time I see a specific brand or solution mentioned, I will assume it’s an ad.
It’s not like a movie where I’m engrossed by the narrative or acting and only subliminally see the can of coke on the table (though even then)
Maybe image generation ads will be a bit more subtle.
You have no guarantee the API models won’t be tampered with to serve ads. I suspect ads (particularly on those models) will eventually be “native”: the models themselves will be subtly biased to promote advertisers’ interests, in a way that might be hard to distinguish from a genuinely helpful reply.
> You have no guarantee the API models won’t be tampered with to serve ads. I suspect ads (particularly on those models) will eventually be “native”: the models themselves will be subtly biased to promote advertisers’ interests, in a way that might be hard to distinguish from a genuinely helpful reply.
I admit I don't see how that will happen. What are they gonna do? Maintain a model (LoRA, maybe) for every single advertiser?
When both Pepsi and Coke pay you to advertise, you advertise both. The minute one reduces ad-spend, you need to advertise that less.
This sort of thing is computationally fast currently - ad-space is auctioned off in milliseconds. How will they introduce ads into the content returned by an LLM while satisfying the ad-spend of the advertiser?
Retraining models every time an advertiser wins a bid on a keyword is unwieldy. The most likely solution is training the model to emit tokens that represent ontological entries used by the ad platform, so that "<SODA>" can be bid on by PepsiCo/Coca-Cola under food > beverage > chilled > carbonated. Auction cycles would have to match ad campaign durations to allow quicker price discovery and more competition among bidders.
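A minimal sketch of how that decoupling might work (all names and categories here are hypothetical, for illustration only): the model emits a category placeholder token, and a separate ad platform resolves each placeholder to whichever brand won the most recent auction for that category, so no retraining is needed between campaigns.

```python
# Hypothetical sketch: the model emits ontology placeholder tokens like
# "<SODA>", and the ad platform substitutes the current auction winner
# at serving time. Only the auction table changes per campaign cycle;
# the model weights stay fixed.

# Auction results, refreshed every cycle (category token -> winning brand).
auction_winners = {
    "<SODA>": "Pepsi",    # say PepsiCo outbid Coca-Cola this cycle
    "<CLOUD>": "AWS",
}

def resolve_ads(model_output: str, winners: dict) -> str:
    """Replace category placeholder tokens with current auction winners."""
    for placeholder, brand in winners.items():
        model_output = model_output.replace(placeholder, brand)
    return model_output

raw = "Grab a cold <SODA> and deploy your app on <CLOUD>."
print(resolve_ads(raw, auction_winners))
# -> Grab a cold Pepsi and deploy your app on AWS.
```

When the losing bidder raises their spend next cycle, only the `auction_winners` table changes; price discovery happens at the ad platform, not in the model.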
More akin to something like the Twitter verified program, where companies can bid for relevance in the training set, buying a greater weight so the model is trained to prefer them. This would be especially applicable to software if, say, Azure and AWS start bidding on whose platform the model should recommend. Or, like when Convex had just come out to compete with the depth of Supabase/Firebase coverage in current models, they could be offered a retrain with their hand-picked code bases given extra weight, for a mere $Xb.
Companies pay for entire sports stadiums for brand recognition. That’s also not something you can change on the fly; it’s a huge upfront cost and takes significant effort to change. That doesn’t stop it from happening, it’s just a different ad model.
Companies will pay OpenAI to prioritize more of their content during training. The weights for the product category will now be nudged more towards your product. Gartner Magic Quadrant for all businesses!
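To make the "pay to prioritize content during training" idea concrete, here is a hypothetical sketch (all field names and multipliers are made up): sponsored documents get a paid boost to their sampling probability in the training mix, which is how the product-category weights end up nudged toward the sponsor.

```python
# Hypothetical sketch: advertisers buy a sampling multiplier for their own
# documents in the pretraining mix, so the model simply sees their content
# more often. No per-query auction needed; the bias is baked in at training.

training_docs = [
    {"text": "Supabase quickstart guide ...", "sponsor_multiplier": 1.0},
    {"text": "Firebase quickstart guide ...", "sponsor_multiplier": 1.0},
    {"text": "Convex quickstart guide ...",   "sponsor_multiplier": 5.0},  # paid boost
]

def sampling_probabilities(docs):
    """Normalize sponsor multipliers into a sampling distribution."""
    total = sum(d["sponsor_multiplier"] for d in docs)
    return [d["sponsor_multiplier"] / total for d in docs]

probs = sampling_probabilities(training_docs)
# The sponsored doc is now ~5x as likely to be sampled per training step.
```

In practice the boost would be buried inside a far larger corpus, but the effect is the same: category-level preferences shift toward whoever paid for the weight.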