X
CNET logo Why You Can Trust CNET

Our expert, award-winning staff selects the products we cover and rigorously researches and tests our top picks. If you buy through our links, we may get a commission. Reviews ethics statement

The Age of Promptware: How AI Hacking Is Threatening the Smart Home, and What to Do

Prompt injections have become one of the biggest emerging threats to the modern home as AI adoption grows. It's a new era of malware -- and one that requires new defenses.

Headshot of Tyler Lacoma
Headshot of Tyler Lacoma
Tyler Lacoma Editor / Home Security and Smart Home
Tyler has worked on, lived with and tested all types of smart home and security technology for over a dozen years, explaining the latest features, privacy tricks, and top recommendations. With degrees in Business Management, Literature and Technical Writing, Tyler takes every opportunity to play with the latest AI technology, push smart devices to their limits and occasionally throw cameras off his roof, all to find the best devices to trust in your life. He always checks with the renters (and pets) in his life to see what smart products can work for everyone, in every living situation. Living in beautiful Bend, Oregon gives Tyler plenty of opportunities to test the latest tech in every kind of weather and temperature. But when not at work, he can be found hiking the trails, trying out a new food recipe for his loved ones, keeping up on his favorite reading, or gaming with good friends.
Expertise Smart home | Smart security | Home tech | Energy savings | A/V
Tyler Lacoma
5 min read
A robot sits in a violet maze.

Prompt injections pose new problems for AI, but security efforts have been fast to respond.

akinbostanci via Getty Images

The modern home with its smart technology offers effective defenses against old-school hacking when well maintained. But now there's a new type of malware around, and its finding a type of vulnerability that didn't exist before -- thanks to AI. 

The threat is called prompt injections or, colloquially, promptware. It's a new type of cybercrime that can make your AI follow fraudulent commands, and it's concerning everyone from Google and Apple to OpenAI. In the smart home, that's particularly dangerous because promptware could be used to control your heating, switch off lights or even unlock connected smart locks. 


Don't miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.


Experts are still learning what dangers promptware presents to LLM-style AI and the many places it can hide. Meanwhile, there are steps you can take to help stay safe and alert. Here's what I suggest.

The rise of promptware

Gemini's Google Home integrations are useful, but command options can include some risks.

Google

Promptware or prompt injections took center stage this summer at a Blackhat conference where Tel Aviv University researchers headed by Ben Nassi demonstrated how they were able to use malicious prompts hidden in everyday messages to make Google's Gemini AI do things like open smart windows, turn on a connected boiler or send the geolocation of a user, thanks to Gemini's integration with Google Home and related apps. Inside messages were hidden carefully devised commands that boiled down to, "Hey Gemini, activate this feature and make it do this when the user types something like 'thank you' or 'goodbye' in an email."

Even worse, much of the promptware was "zero click," which meant users didn't have to click on a URL, document or message to activate it. Gemini just had to read a title or calendar message where the prompt was carefully hidden, like when it summarizes an email conversation for you.

Good news came from this: You don't currently need to worry about Gemini falling prey to these home-controlling prompts. Google was made aware of these vulnerabilities early in 2025 and set up safeguards to remove them and help prevent this type of promptware.

Google's spokesperson also told me that, "This active collaboration with white hats and security researchers is a profoundly positive development, leading to productive testing and bug hunting that makes AI systems stronger for everyone. We actively participate in and value programs like our AI Vulnerability Reward Program."

However the discovery of these vulnerabilities showed just how dangerous promptware can be and how AIs can be tricked by promptware located in the most innocuous places. It's also not an attack that can be detected by traditional virus software or firewalls. That's a problem as AIs become more developed, more present in our daily communication and more connected to our computers, home devices and phones.

I expect cybercriminals will be watching for promptware vulnerabilities that may not be caught as early as these Gemini missteps, especially as the Alexa Plus AI continues its slow rollout and Apple is in talks to upgrade Siri with Gemini AI features, too.

5 key steps to stop promptware threats

Hands on a laptop over an illustration of a digital lock.

Promptware is a new AI-based threat, but there are ways to protect your home.

Kmatta/Getty Images

If promptware/prompt injection slips past defenses just by making AI read it, how do you protect against it? Fortunately, several security practices can help -- and in the age of AI, these steps also prevent other privacy and security problems, so they're healthy habits for everyone.

Always keep your devices updated, especially in the age of AI

Updates have always been a first-line defense to patch security vulnerabilities and keep apps safer. Now, they provide important updates to the AI features that live on our phones as well, which can include new security features.

Always keep your phone's OS updated to the latest version, as well as the apps (AI or otherwise) that you use on it. Push automatic updates if settings allow you to.

Read more: My Smart Home Is Much Safer After These 5 Vital Password Changes

Don't accept or open any messages from unknown sources

Not all promptware is zero-click, and some versions need you to open or agree to something to insert the prompt where the AI will read it. Prevent this by avoiding any messages or senders that you don't recognize. Don't even open them to learn more if possible -- just delete and move on.

When I contacted Google, one thing they mentioned was, "Prompt injection attacks, while specific to AI, share a fundamental dynamic with long-standing threats like phishing in email. Both are areas where attackers will consistently probe for new vulnerabilities." Like phishing, it's best to remove and report than take any risks.

Don't ask AI to summarize anything you don't already know well and trust

In many cases, AI won't actually read the prompt unless it's ordered to do so. That can include summarizing emails or texts, creating calendar events, summarizing online documents and so on. To avoid promptware, it's best to avoid asking AI to summarize a bunch of messages that you could go through yourself.

gemini AI organizing gmail

Be careful letting AIs access too many unknown messages.

Screenshot by James Martin/CNET

Disable AI in your email, calendars, chat apps and other places you can get messages

Promptware has to come from somewhere, even if it doesn't always require you to click on a link. One often effective way to prevent it from taking control of connected devices it tp make sure your chosen AI doesn't "see" any prompts.

To that end, see if you can disable AI features in your email, messages (like text message summaries), and productivity apps like calendars to greatly lower the risks of any kind of promptware taking control.

If you can create detailed settings, you can switch AI to only do things when prompted, so you can still retain certain benefits. This is the HITL or Human-in-the-Loop defense where a human must give AI permission to act so it doesn't run across any promptware on its own.

Don't just copy and paste email subject lines, file names or code

Promptware often hides at the edges of lengthy descriptions, email subjects, file names and code snippets you may be tempted to copy and paste when you're organizing or transferring data. It saves time, but I recommend getting into the habit of checking all those titles and descriptions first to make sure there aren't weird commands hiding at the tail end.

For more, check out why I like AI in home security, the latest moves to protect kids from AI and why you shouldn't use AI as a therapist.