

Guest article: AI can transform precision agriculture, but what are the legal risks?

July 15, 2024

Dr. Siegmar Pohl is a partner in the San Francisco office of law firm Kilpatrick; Jordan Glassman is an associate in the firm’s Raleigh office.

The views expressed in this article are the authors’ own and do not necessarily represent those of AgFunderNews.


Precision agriculture has the power to disrupt farming as we know it, and it will be driven by artificial intelligence (AI) and machine learning (ML). But farmers, their suppliers and purveyors of precision ag technologies also need to be aware of the associated legal risks.

Consider the example of orchard management software that uses an AI model to output precision recommendations for pesticide applications. If the AI model recommends application of a pesticide at a concentration in violation of a government regulation, where does liability lie?

Precision agriculture
Image credit: istock/Chatkaren Studio

What is precision agriculture?

Precision agriculture involves using cutting-edge technologies such as robotics, cloud computing, smart sensors, actuators and artificial intelligence (AI) to enhance and transform traditional farming techniques. For example, precision agriculture can be used to determine optimized farming solutions applicable to certain plants or portions of a field on a particular day of the season.

These techniques have multiple benefits, including improved crop yields, more value and profit potential from arable land, less intensive practices and significant environmental gains. AI will de-risk farming in an increasingly volatile operating environment and relieve labor shortages.

AI has taken off, and farmers, their suppliers and, in particular, purveyors of precision agriculture technologies have taken notice. AI as employed in precision agriculture typically involves the subset of AI known as machine learning (ML). In its simplest distillation, ML is about pattern recognition: it refers to a constellation of algorithms for identifying patterns in data and making probabilistic predictions from that data.

In this respect, AI is commonly understood as a computer program that takes in one or more inputs, like an image, audio recording or table of data, and outputs some prediction or physical action. In some applications, these predictions may be used to inform automated decision-making systems.
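
As a concrete, purely illustrative example, the short Python sketch below uses the scikit-learn library to fit a model on hypothetical field data and output a prediction; every feature name and value here is invented for illustration, not drawn from any real product.

# Illustrative sketch only (not from the article): a small ML model, built with
# the scikit-learn library, that takes tabular field data as input and outputs
# a yield prediction. All feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: [soil_moisture_pct, temperature_c, nitrogen_ppm]
X_train = np.array([
    [22.0, 18.5, 40.0],
    [30.0, 21.0, 55.0],
    [15.0, 25.0, 30.0],
    [28.0, 19.5, 60.0],
])
y_train = np.array([3.1, 4.2, 2.4, 4.5])  # observed yields in tonnes/ha

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Input: today's sensor readings for one field zone; output: a predicted yield
# the software could turn into a recommendation.
todays_readings = np.array([[26.0, 20.0, 50.0]])
predicted_yield = model.predict(todays_readings)[0]
print(predicted_yield)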

For example, AI is used for such diverse applications as disease and pest detection and control based on aerial imagery from satellites and agricultural drones; development of new crop traits; automation of harvesting robots; monitoring and management of livestock, plants and soil; or predicting crop health, market demand, or weather patterns based on historical and real-time information.

But the recent surge of interest in AI means that startups and established enterprises alike will be looking for new ways to expand their use of AI in precision agriculture technologies. Against this backdrop, the United States Senate Committee on Agriculture, Nutrition, and Forestry held a hearing in November 2023 to discuss “Innovation in American Agriculture: Leveraging Technology and Artificial Intelligence.”

The Committee explored the risks that will accompany the increased adoption of AI in precision agriculture, how those risks apply to particular use cases and how the federal government is working to address those risks. Drawing on the insights and expertise of the Agribusiness & Food Technology practice at Kilpatrick, we highlight some of the issues raised during the hearing and provide recommendations for navigating this new landscape of risk.

Dr. Siegmar Pohl (partner) and Jordan Glassman (associate), Kilpatrick. Image credits: Kilpatrick

Next steps for AI in agriculture and AI risks in farming

During the hearing, several experts made clear that further steps are needed for AI to reach its full potential in agriculture, in particular improving the quality of the vast amounts of data collected. Aggregating the data and using AI to analyze and apply that data will lead to better, faster and more precise solutions.

Even though a lot of data is available, not every farmer can access it and feed it into reliable decision-making tools. Data-sharing initiatives, cooperatives and platforms between farmers could help, and could be promoted by establishing data-sharing standards that bolster data aggregation while protecting individual privacy.

Also, not all farmers have access to costly solutions that can exploit their data. In addition to USDA’s conservation programs, additional technical or financial assistance would help farmers implement digital agriculture technologies.

Bias

As intelligent as AI algorithms may appear, they are always a product of the data used to train them. Typically, AI algorithms are trained on historical or other data representative of the problem space to make probabilistic predictions based on what is already known.

As a result, biases present in the training data may creep into the predictions made by AI algorithms. Sanjeev Krishnan of S2G Ventures cautioned during the hearing that AI systems may perpetuate biases, use non-transparent decision-making processes, and may not be accountable for the outcomes. Because AI algorithms may function like black boxes, a farmer using a precision agriculture product integrated with AI may have no clear insight into the data analysis or predictive processes that generated its recommendations.

The biases embedded in precision agriculture technologies may stem, for example, from training data drawn from a particular geography, crop type, season, weather pattern, or scale of operation. Consequently, the predictions of AI may not generalize to all situations.

Senator Welch from Vermont emphasized that smaller farmers may be disproportionately affected by bias. AI precision agriculture software trained on data that disproportionately reflects large-scale farming operations may lead to recommendations and optimizations that are not applicable or beneficial to smaller farms. The recommendations could cause economic harm such as reduced crop yields or increased costs for smaller farmers.
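
One way a developer might surface this kind of skew, sketched here in Python with a hypothetical model, cutoff and data, is to compare prediction error across farm-size segments:

# Illustrative sketch only: compare a model's prediction error for small farms
# against large farms. The model, cutoff and data are hypothetical.
import numpy as np

def error_by_farm_size(model, X, y_true, farm_sizes_ha, small_cutoff_ha=100):
    """X, y_true and farm_sizes_ha are numpy arrays; sizes are in hectares."""
    errors = np.abs(model.predict(X) - y_true)
    small = farm_sizes_ha < small_cutoff_ha
    return {
        "small_farm_mean_error": errors[small].mean(),
        "large_farm_mean_error": errors[~small].mean(),
    }

# If the small-farm error is markedly higher, the training data likely
# under-represents smaller operations, and the product should not be
# marketed as equally accurate for them.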

Although the developers of those algorithms are well aware of this risk, it cannot always be fully mitigated. To minimize risk, we suggest that users of AI products should be educated on how the products were trained and, in particular, what data was used to train them.

While developers may attempt to limit liability for biased outcomes through liability waivers, legal theories relating to bias in training data are untested.

Developers of precision agriculture software built on AI algorithms should take affirmative measures to minimize bias and carefully document their reasonable precautions against biased outputs. As a starting point for avoiding bias and making data more universally usable, we also suggest standards for data collection that build upon the core principles of privacy and security for farm data established by the American Farm Bureau Federation in 2014 for companies working in the agriculture data space, which range from education to liability and security safeguards.

Data privacy and security

Legal issues surrounding the acquisition and use of training data were foremost on the minds of the committee members. Only rarely will developers of precision agriculture technologies have sufficient training data of their own to produce AI models that can operate in complete generality.

Consequently, developers must typically purchase or license training data for this purpose. However, training data may include farm operational data or geo-located data. In some cases, AI model output may be sufficiently precise to enable the identification of the source of the training data. Likewise, training data may include information that is partially or entirely protected as a patent or trade secret.

Todd Janzen, president of Janzen Schroeder Agricultural Law, expressed concern during the hearing that the data farmers collect will eventually be owned not by them, but by the providers of AI-powered systems.

Agricultural data is not neatly categorized under existing data protection laws. For instance, it may not qualify as “personally identifiable” information, which enjoys some protection under existing privacy laws. While some agricultural data may be protected as a trade secret, the legal status of agricultural data is generally untested.

Dr. Jahmy Hindman of Deere & Company indicated that some manufacturers, such as Deere, follow and publicize their principle that all farm data will be controlled by the farmer, including how data is collected, stored, processed and shared. If farmers’ agricultural data is not kept confidential, according to Mr. Krishnan of S2G Ventures, innovation may be discouraged.

For instance, publication of confidential data may constitute a public disclosure of a method that may affect the patentability of that method. On the other hand, data could be aggregated in accordance with data sharing initiatives or cooperatives, which may enable farmers to benefit from using more comprehensive data while anonymizing their own data, thus protecting each farmer’s individual privacy.
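
A purely illustrative sketch of that aggregation approach, using hypothetical field names and an assumed anonymity threshold, might look like this:

# Illustrative sketch only (hypothetical field names and threshold): publish
# regional averages, and suppress any region with too few farms to keep an
# individual operation anonymous.
from collections import defaultdict

MIN_FARMS_PER_REGION = 5  # assumed anonymity threshold

def aggregate_yields(records):
    """records: list of dicts like {"farm_id": ..., "region": ..., "yield_t_ha": ...}"""
    by_region = defaultdict(list)
    for r in records:
        by_region[r["region"]].append(r["yield_t_ha"])

    published = {}
    for region, yields in by_region.items():
        if len(yields) >= MIN_FARMS_PER_REGION:  # only large enough groups
            published[region] = sum(yields) / len(yields)
        # otherwise the group is suppressed rather than exposing one farm's data
    return published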

While developers and users of AI precision agriculture technologies alike may be exposed to liability for violating data protection laws, this is true today only in limited circumstances, because agricultural data is typically not protected under current privacy laws. The more significant risk lies in the loss of goodwill and potential profits that may result from a lack of trust in developers of AI-driven precision agriculture technologies to obtain, store and use collected data responsibly.

In addition to these problems associated with data confidentiality, cybercriminals are increasingly targeting the food and agriculture sector, in particular grain cooperatives and seed and fertilizer suppliers. A comprehensive approach to data privacy and security must thus examine both the data privacy practices and considerations discussed in this section as well as standard cybersecurity best practices.

As a result, developers should take a privacy-first approach to gain trust while users should be vigilant and require that trust to be earned. Again, industry standards for data collection and usage like the core principles mentioned above could help create trust.

One way to shield against cyberattacks is to engage actively with and reference the Food and Agriculture Information Sharing and Analysis Center (Food and Ag-ISAC). The Cybersecurity & Infrastructure Security Agency (CISA) also provides a valuable resource, the Food and Agriculture Sector-Specific Plan, which was published in 2015 and needs updating. Layered defenses and zero-trust strategies, e.g., multi-factor authentication (MFA), will help increase data security.
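
As a purely illustrative sketch of the zero-trust idea (the token store, access list and MFA flag below are placeholders, not any vendor's actual interface), every request for farm data would be re-verified before anything is returned:

# Illustrative sketch only of a zero-trust posture: every request for farm data
# is re-verified (identity, MFA, authorization) before anything is returned.
# The token store, access list and MFA flag are placeholders, not a real API.
from dataclasses import dataclass

VALID_TOKENS = {"user-123": "token-abc"}            # hypothetical token store
AUTHORIZED = {("user-123", "field-7/soil-data")}    # hypothetical access list

@dataclass
class Request:
    user_id: str
    api_token: str
    mfa_verified: bool  # e.g. the result of a recent MFA challenge
    resource: str

def handle(request: Request) -> str:
    # No implicit trust: every call is checked, regardless of network location.
    if VALID_TOKENS.get(request.user_id) != request.api_token:
        return "401: not authenticated"
    if not request.mfa_verified:
        return "401: MFA required"
    if (request.user_id, request.resource) not in AUTHORIZED:
        return "403: not authorized for this resource"
    return f"200: returning {request.resource}"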

Hallucinations or manipulated results

Some AI systems are prone to generating so-called “hallucinations” or otherwise misleading or wrong results. Mr. Janzen observed that farmers may hesitate to rely on AI systems if it is not clear who would compensate them for damages from system failures, incorrect outputs, or, worse, incorrect outputs based on bad or imprecise supplier data, or on internet content that a bad actor has manipulated and the system pulls in.

If an AI system violates the rights of a third party, it is unclear whether the owner of the AI system would be liable. For example, AI tools can breach privacy barriers that humans cannot and may be able to access protected information.

Consider an example of orchard management software that uses an AI model to output precision recommendations for pesticide applications. If the AI model recommends application of a pesticide at a concentration in violation of a government regulation, where does liability lie?

While the software developer, the provider of the training data and the farmer are all implicated, the farmer will likely bear the burden of the violation under existing law. But developers of AI precision agriculture products should anticipate a growing expectation among users that AI products are accurate, especially where products are marketed as such.
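
One practical safeguard, sketched below with hypothetical limits rather than real label rates or regulations, is to check every AI-generated pesticide recommendation against the applicable regulatory ceiling before it reaches the farmer:

# Illustrative guardrail sketch only: the concentration limits below are
# hypothetical placeholders, not real label rates or regulations.
MAX_CONCENTRATION_G_PER_L = {
    "pesticide_a": 1.2,
    "pesticide_b": 0.5,
}

def validate_recommendation(pesticide: str, concentration_g_per_l: float) -> float:
    limit = MAX_CONCENTRATION_G_PER_L.get(pesticide)
    if limit is None:
        # No limit on file: block the recommendation until one is confirmed.
        raise ValueError(f"no regulatory limit on file for {pesticide}")
    # Cap the model's output at the regulatory ceiling instead of passing it through.
    return min(concentration_g_per_l, limit)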

Developers of precision agriculture technologies and software that rely on AI models perceived as accurate should choose their marketing language with care and set reasonable, bounded expectations for the accuracy of AI model output. Adding too many protections to their legal contracts, such as software terms of service (ToS), might slow down widespread adoption of AI tools, as farmers might be wary of signing complicated warranties and disclaimers. Steps could also be taken to make this kind of manipulation illegal or to create liability for the provider of the AI system for resulting damages.

Government initiatives and regulatory updates

During the hearing, Dr. Hindman of Deere advocated for governmental support and loan programs for the adoption of precision technologies. However, Mr. Janzen cautioned that a large share of farmers fear that the increased sharing of data inherent in precision agriculture technologies using AI may be used by the government as a basis for promulgating new regulations, adding to the administrative and compliance workload of farmers.

Still, legislation and guidance that encourage the establishment of cooperatives, platforms and voluntary standards or principles could help build trust and clarify data ownership and liability for inaccurate outcomes of AI processes.
