
Can computer vision help us send better texts? These student researchers think so

July 23, 2025

By Madelaine Millar

EmojiCam uses AI facial recognition to help add emotional cues to text messages

Have you ever sent what you thought was an innocuous text, only for your tone to get wildly lost in translation? Or scrolled through page after page of emojis, searching for a perfect tone marker you weren’t sure even existed? Qinhao Zhang certainly has, and he and his teammates are working on a solution: EmojiCam. 

Much as some video call software sends up clouds of hearts or balloons in response to certain hand gestures, EmojiCam would select an appropriate emoji in response to the user’s facial expression. Zhang’s vision is for the feature to be integrated directly into users’ texting interfaces, so that appropriate emojis can be added as the user types. 

What began as a curiosity about computer vision and a desire to facilitate communication has led Zhang and his team to a deeper understanding of data qualification, lightweight computing, and privacy protections – and to an award at the Northeastern University in Silicon Valley student research showcase at the end of 2024. 

The idea for EmojiCam originated when Xinmeng Wu, Pingyi Xu, and data science students Anning Tian and Qinhao Zhang met in Dr. Lee’s CS5330 Computer Vision course. The four began working together when they were assigned a final project to demonstrate their grasp of image processing techniques. 

“I proposed the idea of EmojiCam because miscommunication happens a lot. For example, my ex-girlfriend and I were in a long-distance relationship, so we were usually texting, and we’d have to put emojis after our sentences to show what the real tone was,” said Zhang. “If the camera could analyze your emotion, and the person you’re talking to can analyze your sentence, taken together maybe people could communicate better.” 

So the team began building and training a model that could recognize different human expressions and match each one to an emotion and an emoji. Each member had different interests and specialties, so they split the work up. Zhang researched what kind of model to use and how to go about training it, while Tian – who had a powerful Nvidia graphics card – worked on the training itself. Xu and Wu focused their efforts on the interfaces and integrations required for the phone and web apps that would let people actually use EmojiCam. 
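The last step of the pipeline described above – turning a classifier's output into an emoji suggestion – can be sketched in a few lines. This is a hypothetical illustration, not the team's code: the article does not give their label set, confidence handling, or emoji mapping, so the seven classes below (typical of facial-expression datasets) and the emoji choices are assumptions.

```python
# Illustrative mapping from predicted emotion to emoji. The class names and
# emoji are assumptions; EmojiCam's actual labels are not given in the article.
EMOJI_FOR_EMOTION = {
    "happiness": "😄",
    "sadness": "😢",
    "surprise": "😮",
    "fear": "😨",
    "disgust": "🤢",
    "anger": "😠",
    "neutral": "🙂",
}

def suggest_emoji(class_scores, threshold=0.5):
    """Given per-emotion scores (e.g. softmax output), return an emoji for
    the most likely emotion, or None if no emotion is confident enough."""
    emotion, score = max(class_scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return None  # better to suggest nothing than mislabel the user's tone
    return EMOJI_FOR_EMOTION[emotion]

# Example: scores as a classifier might emit them
scores = {"happiness": 0.81, "neutral": 0.12, "surprise": 0.04,
          "sadness": 0.01, "fear": 0.01, "disgust": 0.005, "anger": 0.005}
print(suggest_emoji(scores))  # → 😄
```

The confidence threshold reflects a design choice any such tool would face: an emoji suggestion that misreads the user's tone is worse than no suggestion at all.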

But they ran into a problem. 

“There are large challenges in training a model to reach high accuracy recognizing emotions, because human emotional expression is very complicated,” Tian said. He pointed out that variables like age, cultural background, and gender can all impact the facial expressions people use to show their emotions; unbalanced datasets could result in a machine that recognizes certain emotions in certain people, while missing or misidentifying other emotions in others. “Think about all the different ways we have to share that we are happy – it’s already very hard for a human to recognize all the emotions that others have.” 

The team ended up trying several different data sets, and combinations of data sets. They also had to tailor the RAF data set they ultimately landed on, removing images so that each emotion they wanted their model to recognize was represented roughly equally. They found Dr. Lee particularly helpful with this part of the process.
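The balancing step described above – removing images so each emotion is represented roughly equally – amounts to undersampling the larger classes. A minimal sketch, assuming a simple list of (image path, label) pairs; the article only says images were removed from the RAF dataset, so the data layout and label names here are hypothetical.

```python
import random
from collections import defaultdict

def balance_by_undersampling(samples, seed=42):
    """Trim each class down to the size of the smallest class.

    samples: list of (image_path, emotion_label) pairs.
    Returns a new list in which every label appears equally often.
    """
    by_label = defaultdict(list)
    for item in samples:
        by_label[item[1]].append(item)
    target = min(len(group) for group in by_label.values())
    rng = random.Random(seed)  # fixed seed keeps the subset reproducible
    balanced = []
    for group in by_label.values():
        balanced.extend(rng.sample(group, target))
    return balanced

# Example: three "happy" images vs. one "sad" image -> one of each survives
data = [("h1.jpg", "happy"), ("h2.jpg", "happy"),
        ("h3.jpg", "happy"), ("s1.jpg", "sad")]
balanced = balance_by_undersampling(data)
```

Undersampling discards data, which is why, as Tian notes, a sufficiently large and diverse dataset matters: the smallest class sets the budget for every other class.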

“During the development of the project, Professor Lee would go over all the projects in his class, and talk with each group one by one,” Tian recalled. “He’d give suggestions about how the project should go – how we should use a qualified data set, and how the data side is very important. He brought our attention to the diversity of the data set, and how to qualify it.” 

By the end of the semester, the team had a working model for EmojiCam. When users stood in front of the camera at the team’s demo station and displayed emotions on their faces, the model would suggest emojis matching their feeling with roughly 75 percent accuracy. EmojiCam was accepted into Northeastern in Silicon Valley’s fall 2024 Student Research Showcase, where the team had the opportunity to share the demo with fellow students, faculty, alumni, and industry partners.

They were gratified by how warmly EmojiCam was received by the showcase attendees, as well as by the third-place award recognizing the high quality of their work. Getting to discuss their project with a range of experts also got the team thinking about new possibilities for their tool. 

“We got a lot of very interesting questions – why do you want to do this project, what is the future about the project? What do you think of the lightweight models and data privacy? That’s all good experience – we know what people care about now,” Tian said. “Coming out of the showcase, we think lightweight models (where processing is all done on the device itself) are the future of EmojiCam, as well as some very important applications. This is especially true for apps that get daily use, because data privacy is very important.” 

Although their computer vision course is over, the team has continued to develop EmojiCam independently, excited by the potential demonstrated at the student research showcase. Over the spring semester they worked on a second model based on AffectNet, an even larger dataset that Professor Lee helped them acquire, as well as on enhancing the initial model’s architecture and the web and mobile apps used to access it. Zhang found that working on EmojiCam has helped him identify and hone the skills he’ll need for the career in AI he wants to pursue.

“I’m really interested in artificial intelligence; I’m hoping for a position as a machine learning engineer or AI developer,” Zhang said. “This project helped me to know what kinds of techniques I should have for those roles – different base models, PyTorch, and computer vision can help me a lot.” 

For Tian, the best part was getting to work on EmojiCam as a collaborative learning experience with the scaffolding and support of Northeastern University, rather than a product with the pressure to create an immediate return on investment. 

“If we find a good solution for lightweight model implementation on mobile devices, maybe we can create a startup, but that’s in the very, very far future. Right now, we’re just doing what makes sense, trying to find the solution, getting more experience,” Tian said. “I’m proud – the team was all working together, trying to accomplish the same goal.” 
