Are we saving lives or saving money?
A company has developed an artificially intelligent hospital advisor.
Allen Plug is a Calvin University-trained ontologist working for Cycorp. Plug knows how to talk to robots, or – to be more precise – he is an expert in semantic knowledge representation. He’s also Canadian and has lived in Saskatchewan, B.C. and Ontario.
Do you think AI is making our lives better?
AI is really just a software system – a program. It’s not inherently beneficial or detrimental. What matters is how we use it and how we mitigate any detrimental effects. So really the focus should be on policies and regulations. For example, AI requires and uses vast amounts of data, so we need to think carefully about privacy issues.
We also need to think about how we rely on AI judgements. When an AI determines that a person is not eligible for a mortgage, for example, I think that person should have the right to challenge the decision. We should be able to check whether the system is relying on incorrect information or whether its basic principles are biased.
We also need to think about how to help people who work in professions that are likely to be replaced by AI systems. I suspect that certain office workers, like accountants, are in more danger than skilled tradespeople, such as plumbers.
Cycorp’s main product is an AI hospital advisor. Is that about saving lives or mostly about saving money?
Well, both. Our product aims to improve patient throughput, so that when an ER doctor submits an admission order, that patient can be quickly moved to the appropriate unit. The program also tracks the anticipated discharge date and offers recommendations to help achieve it on time. This does save hospitals money and also allows them to see more patients. But it is also good for the patients, since staying in a hospital beyond what is clinically necessary is typically not a good experience.
The Cycorp website says “Cyc is very easy to teach and set up. It’s like training a new employee.” In our lifetimes, will we get used to thinking of AI as fellow employees?
People sometimes, I suspect, think of Star Trek’s Data when they think of AI. But Data is still very much an element of science fiction. There are companies working on human-like androids that will contain AI software, but I doubt we will be tempted to think of them as fellow employees. They will be optimized to perform specific tasks, and we will likely view them as we do robot vacuums – useful (and impressive) tools, but too far from human to truly count as colleagues. Although we will certainly anthropomorphize them, as we anthropomorphize everything!