In this post, I hope to provide some background, context and initial thinking around Connecterra’s perspective on Gen AI. - Yasir Khokhar, co-founder and CEO at Connecterra
At Connecterra, we have anticipated this moment since 2015, when we started building an “AI that empowers farmers and their advisors”. AI-led empowerment has been the core of our mission, grounded in our belief that AI will help tackle labor shortages, improve decision making and make the agriculture system more sustainable.
The fundamental technology that powers AI has existed for decades, but the recent breakthrough of AI models that can reason and communicate with humans marks a turning point for the field.
Some say this is the “iPhone moment for AI”; others compare it to the discovery of electricity. Arguably, it might be even bigger. ChatGPT, the consumer-facing AI, reached 100 million users faster than any application in history.
Understandably, the advent of AI is raising questions, concerns and opportunities in equal measure. This is only the beginning and therefore it stands to reason that we address these topics continually.
The article is broken down into four key themes.
We first take a brief look at the evolution of Artificial Intelligence to define generative AI and how it came into the picture. We touch upon the factors that enabled the development of existing models and their purpose.
We then discuss how recent advancements in natural language processing have led to AI models that go beyond the original goal of improving human-computer interaction. Some experts believe this progress hints at Artificial General Intelligence, although debates continue. With integration tools made accessible through open-source projects, this tech is becoming more easily applicable in a wide variety of fields.
In the third theme, we address the potential impact this tech will have on the dairy industry. While current language models are trained on general human communication, it is also possible to train them on specific knowledge, such as dairy farming. We present two example use cases for the dairy industry: a sparring partner for farmers and a personal analyst for advisors. We also discuss potential challenges, such as data privacy, operating costs and persisting issues with the reliability of the responses these models generate.
In the fourth theme, we address concerns about the impact of AI on the job market and workforce. While it is difficult to predict how this tech will be adopted across different fields, current research shows that experience-heavy professions such as farming are the least at risk. Still, these systems are valuable tools and will likely bring many advantages to those who know how to use them. We close with our perspective on a way forward, where responsible AI principles are built into the development and operation of these services.
The field of artificial intelligence has been around since the late 1940s, with researchers working to mimic the structure of the human brain in a synthetic, software representation. Mathematically this should be possible, but in practice these software systems didn’t work very well until about 10-15 years ago, when the field began its resurgence.
Generative AI, or Gen-AI for short, is the broad term for a category of neural networks that, when shown enough examples, can interpret input and generate new, original variations and responses. ChatGPT, DALL-E and Midjourney are examples of such systems.
In the last five years, the convergence of cloud computing, large volumes of data, improved training techniques and the will to invest in Gen-AI systems has led to major breakthroughs in the field. Large tech companies that could marshal these resources began investing several years ago. OpenAI estimates that it spent over $100M training its GPT-4 model[i] and struck a $10bn investment deal with Microsoft.
The primary goal of these endeavors was to train neural networks that could understand and communicate with humans. Because these models contain very large internal neural network representations and are trained on language, they are known as Large Language Models, or LLMs.
While the initial goal was to train models that would shift human-computer interaction to natural language, these models have started to generate responses that are unique and insightful. These AIs are also generating new ideas and concepts that they were never ‘trained’ to provide.
They have demonstrated a degree of competence that is as good, or better than most humans. For example, many of these AI models have passed difficult exams that would take humans years of study[ii].
While some researchers and scientists postulate that these models are nothing more than very clever ‘word predictors’, there is a growing view that we are seeing sparks of synthetic intelligence[iii], commonly known as Artificial General Intelligence.
At the Dairy Strong conference in Madison, WI a few months ago, I shared the incident of Blake Lemoine, an engineer at Google who had called out his employer for its ‘sentient AI’ that he believed was conscious. While agreement is not unanimous, we are close to these AIs passing the Turing Test, which, at its simplest, measures whether a machine is intelligent enough to be indistinguishable from a human.
With the fundamental building blocks in place, many of these projects have been open sourced to the broader tech community[vi]. As a consequence, access to this technology has been democratized with many tools now available that make it easier to build, tune and operate these models. An outcome of this democratization is the ability to integrate these AIs with other digital tools such as databases, productivity applications and our day-to-day digital tools.
At the 2016 Mobile Tech Conference in Rotorua, I spoke specifically about how such an intelligence would impact the dairy industry and empower farmers. Our learnings since then have only strengthened our stance.
Agriculture is a specific body of knowledge, and dairy farming is a highly experiential industry where formal knowledge, training, skills and data are key to running a successful operation. The current wave of LLMs are generalists. Their underlying training is based on human communication, text and visual data scraped from the internet. However, it is possible to train these LLMs with specific knowledge leading to the creation of a dairy-trained AI model that can help make complex decisions.
It is our belief that almost every aspect of the dairy industry will have an AI-driven use case.
Decisions such as breeding, feeding, genetic selection and investment are multi-dimensional and complex. They require data, observation and context: they are multi-modal, in other words. Farmers often receive advice on all these topics and base their decisions on past experience or static analysis. However, humans are simply not very good at processing large volumes of data, and Gen-AI models are very good at it.
AI in farming has so far largely been embedded in computer vision or sensor data streams, which made for powerful point solutions. What makes Gen-AI in dairy exciting is that we can now start to apply the technology at ‘human level’ abstractions.
With Ida, our first AI-based product, we simplified the user experience, and every farmer on our technology has rated simplicity among their top three features of Ida. Gen-AI will allow complex calculations to be presented in understandable terms, and we are excited about further simplification for our customers.
Consider the potential of this technology to become a farmer’s sparring partner. It would be able to understand the situation and context, process different scenarios, and present outcomes to the farmer for a better decision and understanding. The model would ideally also share its Chain of Thought (CoT), so we could follow its decision-making process and then decide the best course of action.
For the system above to be effective, it would need access to farm data, and it would also need to be coached by the farmer in their preferred style of management. Additionally, these systems will need to be carefully tested in real-world scenarios, as the implications of their output could have serious consequences for a farmer’s business.
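To make the sparring-partner idea concrete, here is a minimal sketch of how farm context and a question might be assembled into a prompt that asks a dairy-trained LLM to show its chain of thought. All field names and values are invented for illustration; a real system would draw on live farm data and call an actual model.

```python
# Illustrative sketch only: field names and values are hypothetical.
def build_sparring_prompt(farm_context: dict, question: str) -> str:
    """Combine farm data with the farmer's question, asking the model
    to reason step by step before recommending an action."""
    context_lines = "\n".join(
        f"- {key}: {value}" for key, value in sorted(farm_context.items())
    )
    return (
        "You are a dairy advisor. Farm context:\n"
        f"{context_lines}\n\n"
        f"Question: {question}\n"
        "Reason step by step, show your chain of thought, "
        "then give a recommendation."
    )

prompt = build_sparring_prompt(
    {"herd_size": 120,
     "avg_daily_milk_kg": 28.5,
     "ration": "grass silage + concentrate"},
    "Should I adjust the concentrate ration this week?",
)
print(prompt)
```

The key design point is that the farm’s own data is placed in the prompt alongside an explicit request for visible reasoning, so the farmer can inspect how the model reached its suggestion rather than receiving a bare answer.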
Advisors struggle with large volumes of data from multiple customers. A personal analyst would enable advisors to offload data crunching to a Gen-AI model. Imagine having summarized information about all customers with key trends and issues neatly available for action, every day!
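The ‘personal analyst’ idea can be illustrated with a minimal sketch that reduces raw per-farm records to a short daily brief, flagging farms that need attention. The field names and the 10% drop threshold are assumptions chosen for illustration, not a description of any existing product.

```python
# Illustrative sketch: thresholds and record fields are hypothetical.
def daily_brief(records, drop_threshold=0.1):
    """records: dicts with a farm name and yesterday/today milk totals (kg).
    Returns one summary line per farm, flagging notable production drops."""
    lines = []
    for r in records:
        change = (r["today_kg"] - r["yesterday_kg"]) / r["yesterday_kg"]
        flag = "  ** check feed/health **" if change <= -drop_threshold else ""
        lines.append(f"{r['farm']}: {change:+.1%} milk vs yesterday{flag}")
    return "\n".join(lines)

brief = daily_brief([
    {"farm": "Farm A", "yesterday_kg": 3400, "today_kg": 3390},
    {"farm": "Farm B", "yesterday_kg": 2900, "today_kg": 2500},
])
print(brief)
```

In a Gen-AI version, a summary like this would be generated in natural language from far richer data, but the workflow is the same: the model does the data crunching overnight and the advisor starts the day with the exceptions already surfaced.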
For enterprises, Gen-AI provides an enormous opportunity to unlock what I would call ‘precision knowledge’. Enterprises hold a vast amount of knowledge about their products and services across regions, countries and customer types. However, this knowledge does not scale and is rarely available to their customers at the moment they need it.
Being able to train LLMs with organization-specific information would be a game changer. This model is being successfully tested in the customer-support segment outside of agriculture and is expected to be one of the top use cases for the technology. Dairy is no different, and this could lead to a massive productivity gain for the industry at large.
Gen-AI does not make other AI technologies obsolete, especially for enterprise customers looking for high-quality data platforms. There is still much room to deploy advanced technology: using AI to detect anomalies in data, fill gaps and predict outcomes are all examples where we don’t need big hammers. What is clear is that we now have a larger toolset to solve larger problems.
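To illustrate the ‘no big hammer needed’ point: gaps and anomalies in a daily milk-yield series can often be handled with plain statistics, no large model required. The series, the gap-filling rule and the two-standard-deviation threshold below are all illustrative choices.

```python
# Illustrative sketch: data values and thresholds are hypothetical.
from statistics import mean, stdev

def fill_gaps(series):
    """Replace None gaps with the average of the nearest known neighbours."""
    filled = list(series)
    for i, v in enumerate(filled):
        if v is None:
            prev = next(x for x in reversed(filled[:i]) if x is not None)
            nxt = next(x for x in filled[i + 1:] if x is not None)
            filled[i] = (prev + nxt) / 2
    return filled

def anomalies(series, k=2.0):
    """Indices where a value deviates more than k standard deviations."""
    m, s = mean(series), stdev(series)
    return [i for i, v in enumerate(series) if abs(v - m) > k * s]

data = [28.1, 28.4, None, 28.0, 27.9, 18.5, 28.2]  # daily milk yield, kg/cow
filled = fill_gaps(data)   # the gap becomes the mean of its neighbours
print(anomalies(filled))   # the 18.5 reading stands out
```

Simple techniques like these keep the data platform clean; the larger Gen-AI tools then sit on top of reliable inputs rather than compensating for noisy ones.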
Data privacy and control has been a topic of discussion in dairy for a while; Gen-AI only makes it more relevant, and the topic has now become mainstream across all industries. If AI is being trained on specific data, then there need to be mechanisms in place by which customers and users can elect to have their data deleted and forgotten from the training datasets. This is a technical challenge, but one that also overlaps with responsible AI principles. I believe we will need to be crystal clear on policies and transparent on implementations. Trust in this tech will play a major role in long-term adoption.
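At its simplest, one part of such a mechanism is excluding a customer’s records from the corpus before the next training run, sketched below with hypothetical record fields. Note that truly removing information already baked into trained model weights (machine unlearning) remains an open research problem; this sketch only addresses the dataset itself.

```python
# Illustrative sketch: record structure and IDs are hypothetical.
def forget_customer(dataset, customer_id):
    """Return the training records with one customer's data removed,
    plus a count of removed records for audit purposes."""
    kept = [rec for rec in dataset if rec["customer_id"] != customer_id]
    removed = len(dataset) - len(kept)
    return kept, removed

dataset = [
    {"customer_id": "farm-001", "text": "ration notes ..."},
    {"customer_id": "farm-002", "text": "breeding log ..."},
    {"customer_id": "farm-001", "text": "milk records ..."},
]
kept, removed = forget_customer(dataset, "farm-001")
print(removed)  # number of records excluded from the next training run
```

Keeping an auditable count of what was removed, and when, is what turns a technical filter into something a privacy policy can actually point to.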
Every few hours, a new tool or approach is released to the public. However, production-scale implementations are rare, and the very first are likely to come from the tech giants. Microsoft has already announced its Copilot program that will integrate with Microsoft 365; Adobe and Google have announced their partnerships, as have many other tech companies.
The proliferation of real-world industrial applications may need more time. This is due to several reasons: costs, performance, and the rapid speed of innovation in the underlying technology.
Core models from OpenAI, Microsoft (ChatGPT, DALL-E etc.) and others are in limited user testing, which is a limiting factor for companies developing applications today. The wait lists are long and the technical performance is slow: running a simple Q&A session against OpenAI’s ChatGPT API can take up to 60 seconds per response*.
This will change in the next 6-12 months. Efforts such as MosaicML have already started to reduce training and operating costs.
Additionally, the current state of the technology is prone to ‘hallucinations’, or ‘confident bullshit’ responses. Given a question, LLMs can give inconsistent or simply incorrect answers. They are also prone to being misled, displaying a degree of naivety that makes them unsuitable to be trusted with business-critical decision making.
Again, we believe these are transient issues that will be resolved over time. We are also observing that user tolerance for hallucinations is higher than expected. Early implementations in the tech world seem to indicate that the productivity gain of LLMs surpasses the need for always-consistent, always-reliable software. Perhaps it is only human to trust companions who are charismatic and knowledgeable but tend to get things wrong now and then.
*Author’s own experience, your mileage may vary
Will AI replace advisors and farmers?
Many in the tech industry are concerned about the implications of this technology and whether it will replace jobs in agriculture.
Short answer: no. But it will change things. The World Economic Forum forecasts that many agriculture job categories will not only be in high demand but also among the least disrupted by Gen-AI. A recent University of Pennsylvania study of the implications of generative AI likewise found professions such as farming to be the least at risk.[v] Interestingly, the most at-risk job categories are highly paid, highly skilled desk workers.
However, history tells us that we cannot fully control the outcome of what will happen[vi]. The genie is out of the bottle, and it will not go back in. A good analogy: when the automobile first came out, the risk was primarily to the driver. Failed brakes, engines on fire and broken steering columns were regular and often fatal problems. As technology evolved, mechanical safety improved. But we ended up with traffic jams, exhaust fumes and road rage. These ‘second order’ consequences[vii] are unforeseeable before the technology has mass adoption, which leads to mass adaptation. What is clear is that businesses and individuals that relied on horse carts were left behind by those that embraced the horse-less carriage.
Advisors, for instance, empowered by AI technology will be faster and better able to predict outcomes for farmers, improving their customer relationships and helping them meet key organizational goals. Farmers who adopt Gen-AI will be able to get decision support whenever needed, backed by their own data, leading to higher-quality operational decisions in less time.
As Steven Sinofsky aptly put it in a recent tweet: “It is not unreasonable to ask if what is going on today with ‘aligned’, ‘responsible’ or ‘sparks of AGI’ is a new risk or simply a desire to finally declare those worries real”.
There is a wide spectrum of opinions on the matter. Over 25,000 individuals, including tech leaders such as Elon Musk, are calling for a ‘pause’ on development of the technology. Others, such as Yann LeCun (Chief AI Scientist at Meta), are pushing for continued engagement. From our point of view, these are healthy debates, to be expected when the nature of the disruption is real.
This disruption will make the discussions above on data privacy and control more relevant than ever, and they overlap with a broader body of work to define responsible AI principles.
In the case of Gen-AI, we believe steady hands and maturity are required to balance the benefits with the risks: an AI code of ethics, if you will. A code built into the development, training, production and operation of AI-driven services, not unlike the three laws of robotics postulated by Isaac Asimov. This is not new; we already have laws and codes that govern our interactions with technology, and a starting point is to adopt them.
At Connecterra we always have and always will consider the implications of any technology on the well-being and livelihood of our customers, partners and our planet. We will strive to understand what these disruptions mean for our users and will create transparency on the benefits and potential pitfalls. We seek to augment human capability, not replace it.
We are excited for the potential of the new advances in AI and we are actively developing innovations in this space. We welcome your questions, perspectives, and thoughts during this pivotal time.
[2303.12712] Sparks of Artificial General Intelligence: Early experiments with GPT-4 (arxiv.org)
Google “We Have No Moat, And Neither Does OpenAI” (semianalysis.com)
Precautionary Principle – an overview | ScienceDirect Topics
The Technium: The Pro-Actionary Principle (kk.org)