If ChatGPT can perfect your cover letter, what can it do to a forged invoice?
People at home face LLM abuse via:
π§ Hyper-polished phishing emails and texts now bypass typo-spotting instincts we rely on.
π¬ AI-powered "sweethearts" nurture romance scams all day long.
π Fake reviews influencing shopping choices with believable five-star lies.
ποΈ Voice or video deepfakes begging relatives for "urgent" transfers.
π Personal data being used to fuel and improve all of the above attack techniques.
People at Work have to be conscious of:
ποΈ Source code, contracts, or HR files dropped into public LLMs to remain on someone else's cloud.
πΆοΈ Unvetted plug-ins and API keys that might create unseen "shadow AI" entry points.
π Hallucinated or inaccurate texts that creep into reports and repositories.
Companies need to be aware that:
π Roughly 40% of BEC emails are now AI-generated, accelerating wire-fraud schemes[ref].
ποΈ Prompt-injection hijacks AI agents and triggers rogue actions or responses.
βοΈ Privacy, copyright, and export-control violations attract regulatory scrutiny.
β Coordinated fake-review floods can tank - or rocket - brand reputation.
π§ͺ Data poisoning steers fine-tuned models toward harmful or undesired responses and decisions.
Fraudsters adopted LLMs as quickly as everyone else. Version 1.0 generates perfect emails and operates simple chatbots; subtler, blended exploits are already brewing.
π‘ At home: Double-check any urgent money request over a separate, trusted channel; limit what you share with chatbots; enable MFA.
π‘ At work: Use approved AI tools, remove any sensitive content before prompting, and review AI output as rigorously as a junior analyst's draft.
π‘ Companies: Apply least-privilege AI access, confirm high-risk actions out-of-band, monitor for adversarial prompts, and red-team your models. (Not an exhaustive list, but the essentials.)