Artificial Intelligence – for or versus the food system? (Pt. 2)

This article is the second of a two-part discussion of AI, the food system, and Green Tech. You may have read Part One of this article, written by AI itself (ChatGPT, specifically). If not, we encourage you to go skim it: we simply cannot compete with its clear presentation and concise summary of some pros and cons of applying AI to environmental sectors and ambitions, most particularly to agriculture and the wider food system.

What we can do is reflect on its presentation of an apparently balanced argument, with a very human dose of enthusiasm, scepticism and, at times, outright jealousy (it is evident that most copywriters are quaking in their boots about the impacts of AI on their careers and, more imminently, their wages. But we are also very aware of how much slower it is to write this article sans AI support). Here it is.

Since the March publication of the Open Letter signed by hundreds of tech leaders calling for a pause on giant AI experiments, more and more alarm bells have been sounded. Most recently, Geoffrey Hinton, a pioneer and so-called “Godfather of AI”, quit his top Google position to warn against the dangers the technology poses to humanity. Yet humanity is facing an equal, and many would argue far larger, challenge in the climate crisis. Every single industry will be affected in dramatic and destabilising ways, especially those rooted in ecosystems and connected to nature. Agriculture, and the way we produce food for a growing global population, is on the front line. Mustn’t we, therefore, use every tool in our arsenal to mitigate and adapt to a rapidly changing and threatened climate?

ChatGPT expounded the virtues of applying AI to agricultural systems, particularly its ability to increase efficiency and reduce the waste of resources. It comes down to information: by gathering and analysing weather, soil, water, and temperature data, the tool can support efforts to plant, or to dose crops with fertiliser, water, and even sun or shade, at the appropriate times – better, healthier, faster-growing plants, and less waste.
An example is automated irrigation – AI-driven sensors analyse when crops need water (through humidity, pH, soil moisture and temperature readings) and water them accordingly, in a precise manner. Given that around 85% of global freshwater is used (and often wasted) by the agricultural sector, this could make an enormous difference to water management. Furthermore, a Google project applying machine-learning algorithms to wind power allowed staff to better meet delivery commitments and improve grid stability by predicting energy output 36 hours in advance. There are countless examples of the same principle applied to crop health, weed and pest control, harvesting, and fertiliser dosing.
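To make the principle concrete, here is a minimal, hypothetical sketch of the kind of rule a precision-irrigation system might apply: sensor readings feed a decision about whether, and how much, to water. The field names, the target moisture, and the dosing arithmetic below are our own illustrative assumptions, not any vendor's actual logic; a real system would fuse many more signals (pH, humidity, temperature) and a proper crop model.

```python
from dataclasses import dataclass

@dataclass
class FieldReading:
    soil_moisture: float     # volumetric water content, 0.0-1.0 (assumed scale)
    forecast_rain_mm: float  # rain expected in the next 24 h, from some weather model

def irrigation_dose_mm(reading: FieldReading,
                       target_moisture: float = 0.30,
                       rain_discount: float = 0.8) -> float:
    """Suggest an irrigation dose in millimetres (purely illustrative).

    Waters only when soil moisture falls below a target, and reduces the
    dose by the rain the forecast already promises.
    """
    deficit = max(0.0, target_moisture - reading.soil_moisture)
    # Assume, for illustration, that 1 mm of irrigation raises moisture by ~0.01.
    dose = deficit / 0.01
    dose -= rain_discount * reading.forecast_rain_mm
    return max(0.0, dose)

# Dry soil but heavy rain forecast: the system holds most of the water back.
reading = FieldReading(soil_moisture=0.22, forecast_rain_mm=8.0)
print(f"Suggested dose: {irrigation_dose_mm(reading):.1f} mm")
```

Even this toy exposes the fragility we discuss next: if the forecast behind forecast_rain_mm rests on historical patterns that no longer hold, the "precise" dose becomes precisely wrong.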
Yet there is one issue that bugs us. AI models analyse weather patterns to predict and inform what serves agriculture, and when. But weather patterns are becoming increasingly (and often wildly) unpredictable. Depending on the historical data, the time span, and the specific localised impacts of climate change and extreme weather events ‘fed into’ the AI as training data, the predictions made could be less accurate than proclaimed. This year, the south of Italy has been drenched in continuous, unexpected rain throughout April and May, making much of its landscape healthier and greener than we are accustomed to. Had crops been irrigated based on models from previous, dry years, we cannot help but question whether AI’s analysis would have been a help rather than a hindrance.
The collection and use of agricultural data is, of course, deeply helpful. It is the automated application of techniques and resources based on AI’s own data analysis that, to us, poses a risk which must be carefully managed.

Another espoused benefit of AI is, indeed, the automation of food-system and agricultural tasks: from monitoring food safety and preventing disease, to increasing food quality (even though ‘quality’ is a subjective, and often overly rigid, parameter in the industry), to self-driving tractors, robots that seek out water leaks, tree-seed planting that is seven times faster than a human, and everything in between. An exciting example of this is Blue River’s automatic weed detection and precision spraying. By using AI to detect and target undesired weeds, farmers can reduce the amount of chemicals and artificial pesticides used on their lands. To be very clear – our personal opinions are vehemently against the use of artificial and chemical pesticides or weed killers. But recognising that the global food industry does, and likely will continue to, use these chemicals, if AI can support their reduced usage and, in turn, improve soil health, then this appears very positive.
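For illustration only – this is not Blue River's actual algorithm – here is a toy sketch of the ‘see and spray’ idea: spray the individual plants a vision model flags as weeds, rather than blanketing the whole field. The labels, confidence threshold, and doses are assumptions invented for the example.

```python
# Toy "see and spray": target only the plants a (hypothetical) vision model
# confidently labels as weeds, and compare the herbicide used against a crude
# stand-in for blanket spraying. All numbers below are invented.

DETECTIONS = [
    # (plant_id, predicted_label, model_confidence)
    (1, "crop", 0.97),
    (2, "weed", 0.91),
    (3, "weed", 0.55),  # low confidence: skipped, at the cost of one surviving weed
    (4, "crop", 0.88),
    (5, "weed", 0.93),
]

ML_PER_PLANT = 2.0          # assumed herbicide dose per targeted plant, in millilitres
CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off before the sprayer fires

targeted = [plant_id for plant_id, label, conf in DETECTIONS
            if label == "weed" and conf >= CONFIDENCE_THRESHOLD]

precision_use = len(targeted) * ML_PER_PLANT
broadcast_use = len(DETECTIONS) * ML_PER_PLANT  # every plant doused indiscriminately

print(f"Plants sprayed: {targeted}")
print(f"Herbicide used: {precision_use:.1f} ml targeted vs {broadcast_use:.1f} ml broadcast")
```

The instruction problem raised in the next paragraph is baked into this sketch: whoever decides which plants count as ‘weeds’, and how confident the model must be before the nozzle fires, decides what survives in the field.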
But again, it’s an issue of how the tech is instructed. Many natural wine farmers we’ve had the fortune of being educated by embrace wild plants and weeds as fuel for local biodiversity and support for their crops’ growth. Yet many industrial farms would likely view the growth of weeds, or of alternative plants, as a drain on resources. And it must also be recognised that industrial agriculture largely prioritises yield: quantity over quality. If automated AI models are taught to push land, soil, and crops to maximum capacity, we cannot be surprised by negative consequences for soil health, long-term fertility and, most importantly, critical biodiversity.

The very automation of agricultural tasks is also something that could, in select situations, be questioned. We must, of course, as a global population (and particularly as wealthy countries), reckon with the realities of labour shortages in agricultural production and the wider food system. Mechanised, automated support will be essential to fill the gaps. But we want to make the case for human presence, touch, feel, and knowledge in agriculture.
An example: it is widely appreciated throughout viticulture that a manual harvest of grapes is best – for the fruit itself, for the health of the vines and, ultimately, for the wine. But manual harvesting is laborious and (without slave labour) costly. Tomatoes, by contrast, are often harvested, or at least collected, with metal tractors. A niche yet real problem is that, in summer, these tractors’ large metal plates and scoops reach searing temperatures that blister the tomatoes as they gather and hold them, reducing their nutritional quality.
The case for human knowledge applied to agriculture and nature has been made most loudly, and most justifiably, by indigenous communities around the world. We believe that this knowledge must be conserved. Human knowledge and intervention in agriculture must be supported by AI, rather than framed as incompatible or in conflict with its algorithms. Another ‘red flag’ to guard against.

Let us turn to the risks ChatGPT itself ‘self-reflectively’ declared, the first being the ‘digital divide’. Smaller, less wealthy farmers are unlikely to have the same access to AI and green technologies as large agri-corporations, further widening inequality throughout the (literal) field of food production. This is a concern, particularly on a global scale between more developed and less developed countries. The digital divide, like many environmental issues, should be tackled with government subsidies that support improvements in small-scale, locally governed, sustainable solutions, and by encouraging safe, open-access technologies.

Yet there appear to be more pressing issues. ChatGPT acknowledges AI’s potential to reinforce biases in decision-making based on historical data, referring to social and economic data that perpetuate inequalities. Again, we want to point to the very real difficulties of acquiring accurate environmental data. The instruction of AI is key and, as many experts have warned, very hard to get 100% right in the long run. The use of AI in the agricultural or food sector is not in itself concerning – what is concerning (and potentially disastrous) is the use of badly instructed AI. We do not pretend to have the technological know-how to decipher even a single line of code. We are also certain that the evolution of, and investment in, green tech over the next ten years will have built a completely different, hopefully more foolproof, agricultural landscape. But an overreliance on technologies to advise on, and even control, resource use – our water, our land, our food – seems worthy of caution.

There are a few other points that ChatGPT neglected to mention. AI, as is rarely discussed, has its own environmental footprint. As evidenced in Nordgren’s 2023 peer-reviewed paper (and many others’), training a single AI model can leave a huge carbon footprint. Indeed, the ICT sector’s emissions are projected to hit 14% of global emissions by 2040. “AI…require(s) vast amounts of energy, and at present and in the near future, this energy comes to a large extent from fossil fuels.”
Green AI thus appears to be chasing its own tail – whilst AI should be applied across sectors to support a reduction in emissions, the development of the technology itself is highly polluting. In fairness, this is not an exclusively AI issue, but one all sectors face: decarbonisation. Yet, as of today, the ICT sector is somewhat under-regulated when it comes to net-zero commitments, compared to oil and gas or, say, automobile corporations. We can only hope that a booming AI industry will be equally pressured to curtail its environmental footprint.

The last point to reflect upon is one of malice. The concerns described thus far have largely been about human error, technological error or, at worst, human greed (i.e., capitalist industrial models of overproduction). But ChatGPT’s self-awareness did not quite stretch to admitting issues of cybersecurity and the dangers of external attacks on AI technologies applied to agricultural and food systems. Whilst this proposition can appear very Hollywood, hacking is a real and very present threat facing all technological sectors. For years, companies and governments have made the case for international legislation to counter cyberwarfare – a ‘Digital Geneva Convention’, so to speak. The risks posed are severe.
Imagine the impact on food production or crop growing. Malevolent instruction or manipulation of resource management could create unimaginable disasters for the sector. The risks are further magnified if green AI is applied to public (rather than private) resources. Any opportunity to, for want of a better term, ‘mess with’ AI here becomes a direct, strategic way to influence a country’s ability to feed and hydrate itself and to care for its environment. If many of the very makers of AI (and thus the people who stand to profit from it) are crying out for regulation, we must recognise the very real dangers of an abuse of the technology. The environmental, agricultural, and food sectors are just as much at risk.

We are not against AI. In fact, we believe all AI should be green AI. All new technology should now serve the common global goal of decelerating, adapting to, and reversing climate change; of restoring biodiversity and reducing waste; and of improving systems and structures to allow the human population to live in harmony with the natural environment. AI must be used to improve the environment, agriculture, and our food system. But it must be used with extreme caution, meticulously instructed, and monitored. Green AI should support – not overwhelm or undermine – human efforts to find better, more sustainable, and less impactful ways to feed 8 billion people. The risks of misusing it, and of not using it at all, are both too great.