Mojimap is a playground that infers which emoji you are expressing based on what it sees.

Two main components power this tool: a lightweight neural network that detects 478 facial landmark points, and a facial blendshape model that uses those points to figure out which facial expression is being made.

For instance: is the user smiling, and to what extent? Which eye is open or closed, or are they squinting? Are the eyebrows pointed downwards?
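As a rough sketch, the per-expression scores described above could be mapped to an emoji like this. The blendshape names follow MediaPipe's ARKit-style categories (e.g. `mouthSmileLeft`, `eyeBlinkLeft`, `browDownLeft`), but the thresholds and the `pickEmoji` helper are illustrative assumptions, not Mojimap's actual code:

```typescript
// Each blendshape score is a value in [0, 1] describing how strongly
// that expression is active on the detected face.
type Blendshapes = Record<string, number>;

// Illustrative mapping from blendshape scores to a single emoji.
function pickEmoji(s: Blendshapes): string {
  const smile =
    ((s["mouthSmileLeft"] ?? 0) + (s["mouthSmileRight"] ?? 0)) / 2;
  const leftBlink = s["eyeBlinkLeft"] ?? 0;
  const rightBlink = s["eyeBlinkRight"] ?? 0;
  const browsDown =
    ((s["browDownLeft"] ?? 0) + (s["browDownRight"] ?? 0)) / 2;

  if (browsDown > 0.5) return "😠";                       // furrowed brows
  if (leftBlink > 0.5 !== rightBlink > 0.5) return "😉";  // exactly one eye closed
  if (smile > 0.7) return "😁";                           // big smile
  if (smile > 0.3) return "🙂";                           // slight smile
  return "😐";                                            // neutral fallback
}
```

In practice the mapping would run on every video frame, so the thresholds matter: they trade responsiveness against flicker between neighbouring emoji.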

Both of these models are super lightweight and actually run client-side in your browser!

If you have any ideas for next steps for this project, please reach out to me via email↗ or Instagram↗.

Made by

TJ Ayoub↗

Version

1.0.0 (Updated 16.09.2024)

References

TensorFlow↗    Google MediaPipe↗    Gesture Recognition↗
