High Availability Farming

Deere & Company has been saying that tractors are like aircraft in their complexity. I laughed at the thought of a flying tractor. But apart from the visual joke, the analogy makes Deere’s repair restrictions seem even more absurd.

Modern tractors are full of computers, just like aircraft, cars, and data centers. Putting a computer chip in a tractor doesn’t make it fly like an aircraft, but it does make the tractor function much like a mainframe in a data center, complete with multiple controllers and peripherals. Keeping that equipment up and running is a critical part of the design, as every farmer, pilot, and computer geek knows.

In the computer industry as with aircraft, uptime—aka “High Availability”—is essential. Yet Deere has been building products using hundreds, if not thousands, of parts, any one of which is a single point of failure, and then tying the replacement of those parts to their exclusive control. Every delay created by this service model becomes a critical issue for farmers.

Aircraft are full of redundant sensors for this very reason. Data center storage is “redundant,” “hot swappable,” and “plug and play”—techniques that have allowed for the exceptional uptime that we have come to expect in the air and online. Despite the opportunity to learn from both the airline industry and data center computing, Deere has designed equipment using hundreds of small sensors without building the redundancy needed for producers to keep rolling in the event of a component failure.
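
To make the data-center comparison concrete, here is a minimal sketch of the kind of redundancy being described. It is an illustration in Python, not anyone’s actual controller firmware, and the sensor names and readings are invented. The point is simply that a controller with redundant sensors can flag a dead one for service and keep the machine rolling instead of parking it in the field.

```python
# Illustration only: a toy redundant-sensor read, not real tractor firmware.
# Sensor names, readings, and the simulated failure are all hypothetical.
from statistics import median


def read_temperature(sensors):
    """Poll redundant temperature sensors and vote on the result.

    Sensors that fail to respond are skipped and flagged for service,
    but the machine keeps operating as long as at least one still works.
    """
    readings = {}
    needs_service = []
    for name, read_fn in sensors.items():
        try:
            readings[name] = read_fn()
        except IOError:
            needs_service.append(name)  # note it for the next service visit, keep rolling

    if not readings:
        raise RuntimeError("all temperature sensors offline; safe stop required")

    # Median voting tolerates one wildly wrong sensor as well as a dead one.
    return median(readings.values()), needs_service


def dead_sensor():
    raise IOError("no response")


# Hypothetical usage: three sensors measuring the same engine temperature.
sensors = {
    "engine_temp_a": lambda: 92.1,
    "engine_temp_b": lambda: 91.8,
    "engine_temp_c": dead_sensor,
}

value, flagged = read_temperature(sensors)
print(f"engine temp {value:.1f} C, sensors needing service: {flagged}")
```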

When challenged by proposed legislative requirements, Deere tries to minimize their computer repair problem by stating that farmers can already fix 99% of their stuff, even though a single failed part can take the machine down 100%. Deere has admitted in legislative hearings that their restrictions apply only to the “digital stuff,” which means the computers are exactly the big hangup for tractor repair.

Deere then argues that farmers can buy everything they need. But in practice, even if a farmer buys all the diagnostic subscriptions, buys the part, and has the tools to install it, the final step of activating the part is monopolized by Deere. Deere firmware can only be installed and activated by a Deere dealership tech, using firmware updates created by Deere for each specific serial number. That creates the potential for massive delays and for manipulating service availability to reward larger customers.
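
For readers outside the computer business, here is a rough sketch of what that activation step amounts to. This is emphatically not Deere’s actual system; the secret key, part name, and serial number below are invented for illustration. It simply shows how binding an activation code to a secret that only the manufacturer holds turns an otherwise working spare part into a brick until a dealer tech blesses it.

```python
# Hypothetical sketch of "parts pairing," not Deere's actual mechanism.
# The secret key, part ID, and serial number are invented for illustration.
import hashlib
import hmac

VENDOR_SECRET = b"held-only-by-the-manufacturer"  # never shared with farmers or independent shops


def activation_code(part_id: str, machine_serial: str) -> str:
    """Only someone holding the vendor secret can compute this value."""
    message = f"{part_id}:{machine_serial}".encode()
    return hmac.new(VENDOR_SECRET, message, hashlib.sha256).hexdigest()


def part_will_run(part_id: str, machine_serial: str, supplied_code: str) -> bool:
    """What a controller effectively checks before enabling a replacement part."""
    expected = activation_code(part_id, machine_serial)
    return hmac.compare_digest(expected, supplied_code)


# A farmer installs a perfectly good replacement part...
print(part_will_run("injector-controller", "SERIAL-000123", supplied_code="0" * 64))  # False: dead in the field

# ...and it only comes alive once someone with the vendor secret issues the code.
code = activation_code("injector-controller", "SERIAL-000123")
print(part_will_run("injector-controller", "SERIAL-000123", code))  # True: now it works
```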

Nor does Deere inform their customers that many of their computerized parts don’t work as spare parts without specific intervention by Deere to activate the part in their system. When it comes to repairing farm equipment, if any step in the process is slow (supply chain shortages, weather, labor shortages, and so forth), then farmers cannot farm. Waiting on a Deere tech to activate a part can be “harvest is ruined” slow.

If tractors are like aircraft in their complexity, it’s only because of their embedded computers. But data centers solved the uptime problem a long time ago, and they didn’t need to restrict repair to do it. In the early days of corporate computing in the 1960s, data center managers had repair techs on site 24 x 7 with a full suite of repair manuals, parts, and tools. That’s akin to a farmer having a barn full of everything needed for repairs, plus a Deere tech living in the barn. The next innovation, arriving in the 1960s and 1970s, was to buy a second full set of everything, to be activated if the main system failed. That’s the same as buying two tractors, or keeping a standby tractor just in case.

Sound familiar? It should. Deere’s repair restrictions are forcing farmers to rely on backup tractors in exactly this way. But data centers moved beyond this approach forty years ago, because it’s expensive and imperfect (what if your backup goes down, too?). Among the many innovations of the 1980s that made repair easier and more efficient was the addition of call-home functions in the equipment itself. If a sensor detected high heat (a common problem), the machine could call the service provider with an alert, and the repair tech could be dispatched already knowing the diagnosis and carrying the needed part. These functions were telephone-based, pre-internet, and worked as reliably as Ma Bell could provide.
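
In modern terms, the whole call-home mechanism fits in a few lines. The sketch below is a Python illustration, not anything a 1980s machine (or a tractor) actually runs; the fault code, temperature threshold, and suggested part are made up. The point is how little it takes to dispatch a tech who already knows the diagnosis and which part to bring.

```python
# Sketch of a 1980s-style "call home" alert, rendered in Python for readability.
# The threshold, fault code, and suggested part are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ServiceAlert:
    machine_id: str
    fault_code: str
    diagnosis: str
    suggested_part: str


def check_coolant(machine_id: str, coolant_temp_c: float, limit_c: float = 105.0) -> Optional[ServiceAlert]:
    """If the machine overheats, build an alert with everything the tech needs to know."""
    if coolant_temp_c <= limit_c:
        return None
    return ServiceAlert(
        machine_id=machine_id,
        fault_code="TEMP-HIGH-01",
        diagnosis=f"coolant at {coolant_temp_c:.0f} C exceeds {limit_c:.0f} C limit",
        suggested_part="coolant temperature sensor / thermostat kit",
    )


def dispatch(alert: ServiceAlert) -> None:
    """Stand-in for the modem call to the service provider's dispatch desk."""
    print(f"[CALL HOME] {alert.machine_id}: {alert.fault_code}; {alert.diagnosis}; "
          f"send a tech with: {alert.suggested_part}")


alert = check_coolant("combine-07", coolant_temp_c=118.0)
if alert:
    dispatch(alert)
```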

This innovation let data center manufacturers offer service agreements that were time-specific and bound the provider to arrive within a short window, with penalties for late or incomplete repairs. Surely some dealers might benefit from offering similar programs for customers willing to pay. The idea that a Deere dealership can simply show up whenever it is convenient for them seems horrifyingly archaic from a data center or aircraft perspective.

Even with such a simple model to follow, Deere has decided to use the internet for communications, knowing full well that much farmland is not covered by reliable broadband and that disruptions in internet service would have profound consequences for repair. I suspect the choice of internet communications has more to do with Deere harvesting operational data for their own purposes than with service delivery.

High-availability computing is normal, and it has not led to theft of IP, to unsafe conditions for technicians or customers, or to any of the other ridiculous outcomes OEMs invoke as excuses to deliberately keep high-tech farming from fulfilling its potential.

Deere wants to say tractors are like airplanes? Sure, they’ve got complex computer systems. But that’s no excuse for keeping those tractors from flying the friendly skies.