Written by Ahmed Elmallah
This blog is part two of a two-part series on patents and AI technology. Part one, Exponential Increases In Artificial Intelligence Patent Filings, discussed the growing number of filed and granted AI patents, including some recent examples.
In this blog, we consider the challenges in patenting artificial intelligence (AI) and the key considerations for companies drafting strong AI patent applications.
Challenges In AI Patenting
Like most software-related patents, AI patent applications face a number of important hurdles. Three common challenges are outlined below.
Is It Patentable Subject-Matter?
Mathematical algorithms are not patentable in many jurisdictions. Implementing a mathematical algorithm using a computer does not, by itself, convert the algorithm into patentable subject-matter. At some level, machine learning models are simply mathematical algorithms embodied in a computer.
Is It New and Non-Obvious?
Machine learning models, and their use or application, must be new and non-obvious in view of previous work.
Is It Sufficiently Described?
Stating that a method or system uses "any machine learning", with no further detail, is unlikely to satisfy the threshold requirement that a patent application provide sufficient disclosure to support the claimed invention.
What to Consider When Drafting Strong AI Applications
The following are key considerations for drafting stronger AI patent applications that mitigate the above-noted hurdles:
Solving a Real-World Problem
Demonstrating how the AI solves a real-world problem is an important factor in establishing patentable subject-matter in many jurisdictions.
European patent practice, for example, emphasizes "Applied AI", which shows how the AI model is applied to improve—or solve—a specific technical problem. This helps to establish that the invention provides a technical contribution and is not a mere mathematical algorithm.
To that end, demonstrating how known machine learning models are adapted to new use-cases can also show novelty.
Example technical applications for AI include:
- applying AI to image processing; or
- applying AI for medical heartbeat monitoring.
Improving a Computer Function
Even if the AI is not applied in a practical way, an AI invention can present other types of improvements that make it patentable subject-matter. For example, if the improvement relates to enhanced computer functioning, this can also demonstrate a physical result beyond a mere mathematical formula. This type of framing is better suited to "Core AI" inventions, that is, inventions directed to the AI model itself rather than to its application to a specific use-case. A brief illustrative sketch follows the examples below.
Examples of computer improvements include:
- adapting an AI model to accommodate unique low-processing-power hardware applications; or
- reducing or optimizing certain parameters of the model to generate faster computer outputs while maintaining or improving predictive accuracy (e.g., reducing size or layers of networks, generating outputs using less training data, etc.).
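As a purely illustrative example of this kind of "Core AI" improvement, the short Python sketch below (not drawn from this article, and assuming the PyTorch library is available) contrasts a baseline network with a reduced network that has fewer and narrower layers, and therefore fewer parameters to compute at inference time.

```python
# A minimal, hypothetical sketch of the kind of computer-function improvement
# described above: a compact network with fewer layers and parameters that
# produces outputs faster while targeting comparable predictive accuracy.
import torch.nn as nn

# A baseline classifier with several fully connected layers.
baseline = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# A reduced model: fewer and narrower layers, i.e., fewer parameters to compute.
compact = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

def parameter_count(model: nn.Module) -> int:
    """Total number of trainable parameters in the model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print("baseline parameters:", parameter_count(baseline))
print("compact parameters: ", parameter_count(compact))
```

In a patent application, the accompanying description would explain what was reduced or removed and why accuracy is maintained, rather than simply presenting the smaller model.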
Surrounding Hardware Environment
Machine learning models are not usually applied in a vacuum; they are often applied in a specific environmental context. For example, models are applied in contexts that involve receiving physical data from sensors and processing that data, which is closely tied to the specific use-case of the model. A short sketch of such an environment follows the examples below.
Notably, describing this hardware environment can add a physical dimension to the claimed invention, which also assists in overcoming the patentable subject-matter hurdle. Further, it can add a dimension of novelty to the overall concept.
Examples of surrounding hardware environments include:
- obtaining ECG and other biometric data from physical sensors attached to the patient, and feeding that data into a real-time heartbeat monitoring AI; or
- acquiring images from a physical camera mounted to a vehicle, and feeding that into a real-time object detection AI.
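For illustration only, the following sketch shows a surrounding hardware environment of the second kind listed above: frames acquired from a physical camera and passed to an object detection model. The OpenCV camera calls are real, but detect_objects() is a hypothetical placeholder for whatever model the application would actually describe.

```python
# A minimal, hypothetical sketch of a surrounding hardware environment:
# frames are acquired from a physical camera and fed into an AI model.
import cv2  # OpenCV, used here only to read frames from a camera

def detect_objects(frame):
    """Placeholder for a real-time object detection model (assumed, not specified)."""
    return []  # e.g., a list of (label, bounding_box, confidence) tuples

capture = cv2.VideoCapture(0)  # camera index 0, e.g., a camera mounted to the vehicle
try:
    for _ in range(100):  # process a bounded number of frames for this sketch
        ok, frame = capture.read()          # acquire one frame from the physical sensor
        if not ok:
            break
        detections = detect_objects(frame)  # feed the sensor data into the AI model
        # ... the detections would then be post-processed (see the next section) ...
finally:
    capture.release()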
Post-Processing of Model Output
How the output(s) of the model are further processed can also address the patentable subject-matter and novelty hurdles. For example, if the output is used to impart a unique physical change in the real world, this can demonstrate something more than a mathematical algorithm and can also contribute to the novelty of the overall concept. A simple sketch of this idea follows the examples below.
Examples of post-processing of model outputs include:
- using the output of an image processing AI to effect feedback control on the motion of a vehicle; or
- using the output of a voice recognition AI, e.g., in a home assistance system, to effect feedback control on the temperature or lighting inside a house.
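The sketch below is a hypothetical illustration of the first example above: the output of an image processing AI (here, an estimated distance to a detected object) is post-processed by a simple proportional controller into a command that changes the vehicle's speed. All names and numbers are assumptions made for illustration, not a description of any real system.

```python
# A minimal, hypothetical sketch of post-processing a model output into a
# physical effect: a proportional controller that slows a vehicle as a
# detected object gets closer. All values here are illustrative assumptions.

TARGET_GAP_M = 30.0   # desired following distance, in metres (assumed)
GAIN = 0.5            # proportional gain (assumed)

def speed_adjustment(detected_distance_m: float) -> float:
    """Convert the AI's estimated distance to an object into a speed change (m/s)."""
    error = detected_distance_m - TARGET_GAP_M
    return GAIN * error  # positive: speed up; negative: slow down

def apply_to_vehicle(delta_speed: float) -> None:
    """Placeholder for the actuator interface that effects the physical change."""
    print(f"commanding speed change of {delta_speed:+.1f} m/s")

# Example: the object detection AI reports an object 22 m ahead.
apply_to_vehicle(speed_adjustment(22.0))  # commands a slow-down
```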
Describing the Trained Model
Despite the temptation, it is often insufficient to state broadly that "any" machine learning model can implement the idea. Even if this is generally true, the application should at least describe one possible implementation to meet the disclosure requirements in many jurisdictions (one possible level of detail is sketched after the list below). Further, as noted above, there may also be novelty in developing and describing a new model architecture.
Example considerations here include:
- describing the configuration of model layers (e.g., use and arrangement of certain types of layers);
- if the model is an "off-the-shelf" model, then describing any adaptations to accommodate specific use-cases, or applications;
- describing the type of input data required for the model, as well as any pre-processing of the input data (e.g., labelling of the data); and/or
- describing the types of generated output data.
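By way of a hypothetical illustration of the first and third points above, the sketch below shows one concrete way a model could be described: a small convolutional network for heartbeat classification, annotated with its layer arrangement, expected input, and generated output. The architecture and dimensions are assumptions made for illustration (not a recommended design), and the sketch again assumes the PyTorch library.

```python
# A minimal, hypothetical sketch of the level of detail that can support a
# disclosure: one concrete architecture, its expected input, and its output.
import torch.nn as nn

class HeartbeatClassifier(nn.Module):
    """Classifies a fixed-length, normalised ECG segment as normal or abnormal."""

    def __init__(self):
        super().__init__()
        # Input: a 1-channel ECG segment of 1,000 samples (pre-processed/normalised).
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # 1D convolution over the signal
            nn.ReLU(),
            nn.MaxPool1d(4),                             # down-sample by a factor of 4
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                     # collapse to one value per channel
        )
        # Output: two class scores (normal / abnormal heartbeat).
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        x = self.features(x)          # shape: (batch, 32, 1)
        x = x.flatten(start_dim=1)    # shape: (batch, 32)
        return self.classifier(x)     # shape: (batch, 2)
```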
How Is the Model Trained?
Finally, explaining how the machine learning model is trained can also, in some cases, be an important factor in meeting the sufficient disclosure requirement (an illustrative sketch follows the list below). Additionally, there may again be added novelty in how the model is trained to accommodate a specific use-case or, otherwise, in how it is trained to produce a more accurate model.
Example considerations here include:
- for supervised learning, the type of supervised learning model used and/or the factors used to determine when the model is sufficiently trained;
- the type and/or selection of training data (e.g., selection of unique datasets);
- any special pre-processing to the training data to generate a better trained model (e.g., filtering or labelling of the data); and/or
- the method and/or circumstances under which the training data points are acquired.
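Continuing the same hypothetical example, the sketch below illustrates the kind of training detail that can be described: supervised training on labelled ECG segments, a simple pre-processing step, and the condition used to decide when training is complete. Every specific choice here (optimiser, learning rate, stopping rule) is an illustrative assumption rather than a prescribed approach.

```python
# A minimal, hypothetical sketch of training details that could be described:
# the labelled training data, its pre-processing, the loss function, and the
# condition used to decide when training is complete.
import torch
import torch.nn as nn

def preprocess(segment: torch.Tensor) -> torch.Tensor:
    """Example pre-processing: normalise each ECG segment to zero mean, unit variance."""
    return (segment - segment.mean()) / (segment.std() + 1e-8)

def train(model: nn.Module, labelled_data, epochs: int = 50, patience: int = 5):
    """Supervised training on (segment, label) pairs with simple early stopping."""
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    best_loss, stalled = float("inf"), 0

    for epoch in range(epochs):
        epoch_loss = 0.0
        for segment, label in labelled_data:      # labelled training examples
            x = preprocess(segment).unsqueeze(0)  # add a batch dimension
            y = torch.tensor([label])
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()
            epoch_loss += loss.item()

        # Stop when the loss no longer improves for `patience` consecutive epochs.
        if epoch_loss < best_loss:
            best_loss, stalled = epoch_loss, 0
        else:
            stalled += 1
            if stalled >= patience:
                break
    return model
```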
If you have any questions about AI patents, please reach out to the Bennett Jones Intellectual Property Law group.
Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.
For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.