By Marina Wang

Eye, robot: Artificial intelligence dramatically improves accuracy of classic eye exam


The classic eye exam may be about to get an upgrade. Researchers have developed an online vision test—fueled by artificial intelligence (AI)—that produces much more accurate diagnoses than the sheet of capital letters we’ve been staring at since the 19th century. If perfected, the test could also help patients with eye diseases track their vision at home.

“It’s an intriguing idea” that reveals just how antiquated the classic eye test is, says Laura Green, an ophthalmologist at the Krieger Eye Institute. Green was not involved with the work, but she studies ways to use technology to improve access to health care.

The classic eye exam, known as the Snellen chart, has been around since 1862. The farther down the sheet a person can read, the better their vision. The test is quick and easy to administer, but it has problems, says Chris Piech, a computer scientist at Stanford University. Patients start to guess at letters when they become blurry, he says, which means they can get different scores each time they take the test.

Piech is no stranger to the Snellen test. At age 10, doctors diagnosed him with chronic uveitis, an inflammatory eye disease. “I was sitting through all these tests and it was pretty obvious to me that it was terribly inaccurate,” he says. He wanted to find a way to remove human error from the Snellen exam and improve its accuracy.

So Piech and his colleagues developed an online test. Users first calibrate their screen size by adjusting a box on a web page to the size of a credit card. After the user enters their distance from the screen, the test displays an “E” in one of four orientations. Based on each answer, the algorithm uses statistics to predict a vision score, much as AI builds a playlist from your favorite artists or chooses which ads to show based on what you clicked earlier. As the test goes on, its predictions become more accurate. The test asks 20 questions per eye and takes a couple of minutes to complete.
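
To make the idea concrete, here is a minimal sketch, in Python, of how such an answer-by-answer statistical update could work. It is not the authors’ code: the acuity grid, the psychometric function, and the strategy for picking each letter size are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the StAT implementation): keep a probability
# distribution over candidate acuity scores and sharpen it after every
# tumbling-E answer. All constants here are illustrative assumptions.

# Candidate acuities on the logMAR scale (0.0 ~ 20/20 vision, 1.0 ~ 20/200).
acuities = np.linspace(-0.3, 1.3, 161)
posterior = np.ones_like(acuities) / len(acuities)  # start uninformed

def p_correct(letter_logmar, true_logmar, guess_rate=0.25, slope=10.0):
    """Chance a patient answers a tumbling-E correctly.

    Four possible orientations give a 25% guessing floor; the probability
    rises smoothly as the letter grows larger than the patient's threshold.
    """
    p_see = 1.0 / (1.0 + np.exp(-slope * (letter_logmar - true_logmar)))
    return guess_rate + (1.0 - guess_rate) * p_see

def update(posterior, letter_logmar, answered_correctly):
    """Bayes rule: reweight every candidate acuity by how well it explains
    the observed answer, then renormalize."""
    likelihood = p_correct(letter_logmar, acuities)
    if not answered_correctly:
        likelihood = 1.0 - likelihood
    posterior = posterior * likelihood
    return posterior / posterior.sum()

# Simulated 20-question run against a hypothetical patient at 0.3 logMAR.
rng = np.random.default_rng(0)
true_acuity = 0.3
for _ in range(20):
    letter = float(np.sum(acuities * posterior))   # probe near the current estimate
    correct = rng.random() < p_correct(letter, true_acuity)
    posterior = update(posterior, letter, correct)

estimate = float(np.sum(acuities * posterior))
print(f"estimated acuity: {estimate:.2f} logMAR")
```

Each answer reweights the candidate scores, so the distribution narrows over the 20 questions even when the patient occasionally guesses correctly by luck.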

When the researchers ran their “Stanford acuity test” (StAT) through 1000 computer simulations mimicking real patients, the new test reduced error by 74% compared with the Snellen test, the team reports this month in the Proceedings of the AAAI Conference on Artificial Intelligence. The simulations work by starting with a known acuity score and factoring in the types of mistakes a human might make. Each simulated patient then virtually “takes” the different eye tests so their accuracy can be compared. The team used simulations instead of actual patients because a simulation starts with the “true” acuity, something that can never be known exactly for a human.
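
The comparison itself can be pictured with a short, hedged sketch: because the simulator chooses each virtual patient’s true acuity, a test’s error is simply the gap between its estimate and that known value. The helper name `run_virtual_test` below is a placeholder, not part of the published work.

```python
import numpy as np

# Hedged sketch of the evaluation idea described above, not the authors'
# actual benchmark. `run_virtual_test` stands in for any simulated exam
# (Snellen or StAT) that returns an acuity estimate for a virtual patient
# whose true acuity and error-prone answering behavior are known.

def mean_absolute_error(run_virtual_test, n_patients=1000, seed=42):
    """Average gap between a test's estimate and the known true acuity."""
    rng = np.random.default_rng(seed)
    errors = []
    for _ in range(n_patients):
        true_acuity = rng.uniform(-0.3, 1.3)           # ground truth, chosen by the simulator
        estimate = run_virtual_test(true_acuity, rng)  # virtual patient "takes" the test
        errors.append(abs(estimate - true_acuity))
    return float(np.mean(errors))

# Comparing two tests then reduces to comparing their mean errors, e.g.
# mean_absolute_error(run_snellen) versus mean_absolute_error(run_stat).
```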

You can take StAT yourself at myeyes.ai, although Piech cautions that the test isn’t meant to replace doctor visits just yet.

“It’s definitely helpful,” says Mark Blecher, an ophthalmologist in Philadelphia who has written opinion articles comparing various eye tests. Online eye tests aren’t really new, he notes, but he commends the clever use of AI to boost accuracy.

As a next step, Blecher says, it will be important to consider the circumstances in which users take the test; factors such as room lighting and screen brightness could affect scores.

Whether the StAT test will actually replace the Snellen chart is up for debate. Blecher says getting all eye care professionals to agree on a new standard would be “daunting at best” because the status quo can be hard to overcome.

Green is more optimistic. “I think that it would very quickly get adopted by over 80% of ophthalmology practices,” she says. “We really are desperate to have some good way of doing this.”

Watch this video: https://www.youtube.com/watch?v=Z5vxRC8dMvs
