Editor’s Note: Houwei Cao is an assistant professor in the Department of Computer Science at New York Institute of Technology. The opinions expressed in this commentary are her own.

Allowing computers to monitor and sense our emotions — rather than just track our everyday habits — seems creepy now. But as technology advances, consumers will grow to appreciate how artificial intelligence that can precisely gauge our thoughts and feelings will make our daily lives easier, with experiences that are more personalized, convenient and attuned to our emotions.

AI is already a big part of everyday life. For example, Starbucks uses AI in its rewards program and mobile app to track a customer's orders, the time of day they are placed, the weather and more, then uses that data to customize recommendations. Amazon revolutionized retail in part by using customers' previous purchases to recommend other products.

These efforts are noteworthy, but they barely scratch the surface of how AI could be used to understand our wants and needs. Soon, AI-based customer service won’t just assist humans — it will understand our feelings. With this information, companies can adjust their service to improve the customer experience.

Consider how using AI to evaluate emotions could revolutionize in-person service. Can't find what you're looking for in a store? Sensors such as cameras, microphones or facial scanners could detect your frustration by analyzing your facial expressions and tone of voice, then immediately ping a human or a robot to come help.

Or, imagine you're antsy about a restaurant's slow service. At the table, a small AI-equipped computer with the same sensors could evaluate your facial expressions or voice, note your distress and signal for an employee to come assist. If the computer tags you as particularly angry, the restaurant could even offer a free treat.
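In code terms, the loop is simple. Here is a minimal, hypothetical sketch in Python: the `classify_emotion` function stands in for a trained facial-expression model, and the thresholds and staff roles are placeholder assumptions, not any deployed system.

```python
# A minimal, hypothetical sketch of the detect-and-escalate loop.
# `classify_emotion` stands in for a trained facial-expression model;
# the thresholds and staff roles are placeholder assumptions.

from dataclasses import dataclass

FRUSTRATION_THRESHOLD = 0.6  # assumed score at which to send help
ANGER_THRESHOLD = 0.8        # assumed score at which to offer a comp

@dataclass
class EmotionScores:
    frustration: float  # 0.0 (calm) to 1.0 (very frustrated)
    anger: float        # 0.0 (calm) to 1.0 (very angry)

def classify_emotion(frame) -> EmotionScores:
    """Stand-in for a real model that scores one camera frame."""
    return EmotionScores(frustration=0.7, anger=0.3)  # dummy output

def check_table(frame, dispatch, offer):
    """Route one camera frame to the right service response."""
    scores = classify_emotion(frame)
    if scores.anger >= ANGER_THRESHOLD:
        dispatch("manager")             # escalate to a person right away
        offer("complimentary dessert")  # goodwill gesture for anger
    elif scores.frustration >= FRUSTRATION_THRESHOLD:
        dispatch("server")              # a visit from staff may be enough

# Example wiring: print instead of paging real staff.
check_table(frame=None,
            dispatch=lambda who: print("Paging:", who),
            offer=lambda item: print("Offering:", item))
```

The hard engineering lives inside the stand-in classifier; the surrounding service logic is just thresholds and routing.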

This type of AI will also transform shopping online. If you’re scrolling through a website for the perfect outfit, for instance, your computer could use its forward-facing camera to pick up subtle facial cues — like furrowed eyebrows or slight pouts. The site could then use that information, combined with data from your previous browsing behavior, to offer you options you’ll like.
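One way such a site might combine the two signals, sketched under assumptions: every candidate item gets a relevance score from browsing history and an engagement score inferred from the shopper's expression while that item was on screen, and a weighted blend reorders the results. Both scores and the weight are illustrative, not any retailer's actual method.

```python
# Sketch of blending a facial-cue "engagement" signal with ordinary
# browsing-based relevance to re-rank suggestions. Both score dicts
# and the blending weight are assumptions for illustration.

def rerank(items, relevance, engagement, weight=0.3):
    """Reorder items by a weighted mix of two scores in [0, 1].

    weight controls how much the emotional signal shifts the ranking.
    """
    def score(item):
        return (1 - weight) * relevance[item] + weight * engagement[item]
    return sorted(items, key=score, reverse=True)

# Example: a furrowed brow while item "a" was on screen lowers its
# engagement score, so the similar item "b" moves to the top.
print(rerank(["a", "b"],
             relevance={"a": 0.9, "b": 0.85},
             engagement={"a": 0.2, "b": 0.8}))
```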

As a data scientist working on refining machines' ability to detect human emotions, I know these seemingly futuristic technologies are well within reach. I'm currently developing a comprehensive machine-learning model that learns over time and could eventually let machines read customers better than a typical store attendant or call center employee. That may seem hard to believe, but machines don't share common human vulnerabilities like fatigue, hunger or overwork.

My AI model will take into account visual, audio and language cues simultaneously, such as body language, tone of voice and word choice, to perform an in-depth analysis of people's emotional states. This data-driven insight could eventually enable businesses to understand how a customer feels in a given situation, even if they know very little about that customer.
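The general shape of such a model can be illustrated with a simple late-fusion sketch: each cue stream yields its own probability distribution over emotions, and the streams are combined into one estimate. A trained system would learn the fusion weights and often combine features far earlier; the emotion labels and fixed weights below are illustrative assumptions, not the model described above.

```python
# Illustrative late fusion of per-modality emotion estimates.
# Each modality (face, voice, words) outputs a probability
# distribution over the same emotion labels; a weighted average
# combines them. Labels and weights are assumptions.

import numpy as np

EMOTIONS = ["neutral", "happy", "frustrated", "angry"]

def fuse(p_face, p_voice, p_text, weights=(0.5, 0.3, 0.2)):
    """Weighted average of per-modality emotion distributions."""
    stacked = np.stack([p_face, p_voice, p_text])
    fused = np.average(stacked, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a distribution

# The face looks mostly neutral, but a tense voice and curt wording
# tip the fused estimate toward frustration.
p = fuse(np.array([0.5, 0.1, 0.3, 0.1]),    # face
         np.array([0.1, 0.05, 0.6, 0.25]),  # voice
         np.array([0.1, 0.1, 0.6, 0.2]))    # words
print(dict(zip(EMOTIONS, p.round(2))))
```

Even this crude average captures the key benefit of multiple cues: a signal one channel misses (a neutral face) can be caught by another (a tense voice).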

The prospect of omnipresent AI scanning faces and listening to voices sounds intrusive. And companies will have to put rigorous security measures in place to protect customers' information. But overall, consumers will enjoy the kind of service AI will enable. Just look at the popularity of voice assistants like Amazon's Alexa. A generation ago, the idea of allowing a machine to monitor our personal conversations would have seemed ludicrous. Now, it's commonplace. Allowing these same assistants to interpret our visual cues is a logical next step.

History has shown that wariness of new technology fades as its benefits emerge. People constantly evaluate the emotions of customers, colleagues and loved ones to make decisions. Robots simply automate this process. And the more data they have, the better they will be at it.