Harrison Pensa LLP

September 25, 2025 | Press release

AI Hype: Useful tool or overblown trend?

Artificial intelligence gets different reactions from different people. AI might be game-changing technology that will improve the human condition, a dystopian threat to humanity, a purveyor of harmful and misleading AI slop, or a useful but overhyped tool with a bunch of risks to manage. It could perhaps be all of those at once.

Understanding AI hype

Today I'd like to ponder the thought that AI is a useful but overhyped tool with a bunch of risks to manage.

Like most emerging tech, AI is going through the typical Gartner hype cycle. Early on, expectations are inflated; then comes disillusionment; eventually we sort out what the technology is actually good for and put it to effective use. The dot-com bubble and blockchain followed the same pattern.

Why AI feels overhyped

There is definitely a lot of hype and inflated expectations about AI at the moment. Everyone seems to be selling an AI product of some kind, or at least a product that claims to have an AI component.

Cory Doctorow wrote an interesting article arguing that we need AI centaurs but are being served reverse centaurs: a centaur is a human helped by a machine, while a reverse centaur is a human pressed into serving the machine. In other words, AI should be a tool that we control, not a product that controls us.

Morten Rand-Hendriksen expressed a similar sentiment when he said "'I'll use this hammer to build something!' said nobody building anything useful. Starting with 'I'll use this AI to build a product' gets you nowhere!"

In a similar vein, I've said before that Canada's anti-spam law is like using a sledgehammer to kill a fly in a china shop. AI is sometimes used and abused the same way.

One example of AI as a useful tool is having it analyze what we have written before, or what others have written, to see what has been said on a topic. But in doing that, we need to understand its capabilities, potential flaws, and the risks involved.

We also need to understand how to use the output without infringing copyright.

The risks behind AI hype

AI is notorious for hallucinating, a somewhat polite way of saying it makes a lot of mistakes. We need to fact-check the output rather than take it at face value. One of the challenges, of course, is that when AI output covers something we know well, the mistakes are easy to spot. It is not so easy when the output is about something we aren't familiar with.

There is a saying that if you have a hammer, everything looks like a nail. At the moment, it seems that everything looks like something AI should be used for. We all know you don't use a sledgehammer to drive a finishing nail, and you shouldn't use a hammer to drive a screw. And to drive a screw, sometimes a plain old screwdriver works as well or better than an impact driver.

The right tool needs to be used for the right job. A big difference is that choosing and using hand tools is a lot easier to understand than the mysterious workings of AI. And the consequences of getting AI wrong can be more serious than putting a dent in your work or bruising a finger.

Want to learn more? Explore our other articles on AI and the law.


David Canton is a business lawyer and trademark agent at Harrison Pensa with a practice focusing on technology, privacy law, technology companies and intellectual property. Connect with David on LinkedIn and Twitter.
