After a months-long dispute, a civil resolution tribunal ordered Air Canada to refund a customer who was given incorrect information about refunds by an AI chatbot on the company's website. The company had argued that the chatbot was a separate legal entity and that it was not responsible for the bot's actions.

Jake Moffatt, the petitioner in the case, visited Air Canada's website shortly after his grandmother died, reports Ars Technica. Unsure how the airline's bereavement rates worked, he asked the chatbot on the website to explain. The chatbot misled him by telling him to book a flight immediately and then request a refund within 90 days. That is not how the policy works. Here is what Air Canada's policy actually says: "Air Canada's bereavement travel policy offers an option for our customers who need to travel because of the imminent death or death of an immediate family member. Please be aware that our Bereavement policy does not allow refunds for travel that has already happened."

Moffatt followed the chatbot's instructions and took the flight. He then asked Air Canada for a refund, which the company refused. At the time, the company argued that the chatbot's response had included a link to the actual policy, so Moffatt should have known bereavement rates could not be requested after the flight. After trying to get a refund for months, Moffatt filed a small claims complaint with Canada's Civil Resolution Tribunal.

Before the tribunal, Air Canada argued that the chatbot was a separate legal entity responsible for its own actions. "This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot," wrote a tribunal member in the ruling.

The case is the first of its kind in Canada, according to Mashable, which means the judgement could have implications for future cases in which other companies' AI-powered chatbots come under scrutiny.