Air Canada Boeing: chatbot as a "separate legal entity"

Photo: Bayne Stanley/ZUMA Wire/IMAGO

The airline Air Canada must give a customer the partial refund that a chatbot promised him on behalf of the company. An arbitration tribunal in the Canadian province of British Columbia has ordered the company to pay 812 Canadian dollars (559 euros), as the "Edmonton Journal" reported. The artificial intelligence had previously promised the customer a refund on a full-price ticket he had purchased, contrary to the company's guidelines.

The chatbot's error lay in a detail: as the ruling shows, the customer had asked the company about a discount on a short-notice booking because of a bereavement; his grandmother had died. A few airlines offer such "bereavement fares", and Air Canada is one of them. However, the airline's policy stipulates that such a ticket must be booked before departure. The chatbot, by contrast, advised the customer to first book a ticket at the normal price and then contact the company within 90 days for a refund.

That is what the man did, and that is exactly the payment Air Canada did not want to make. Particularly interesting is the airline's justification for shifting all responsibility onto the machine. The ruling states that the airline argues "that it cannot be held liable for information provided by any of its agents, servants or representatives – including a chatbot." In doing so, the company suggests "that the chatbot is a separate legal entity that is responsible for its own actions."

»Responsible for all information«

This argument did not convince the court. "It should be clear to Air Canada that it is responsible for all information on its website," said the arbitration tribunal's ruling. It doesn’t matter whether the information comes from a static page or a chatbot.

The legal dispute is also interesting because it raises the question of responsibility in connection with the use of artificial intelligence: there are many unresolved questions about who assumes liability when an AI makes a wrong decision.

Sol