Public Models #14 and #15 are now ready for beta testing, and we want your feedback!
Moderation
Overview
The Moderation Model identifies and filters out (“moderates”) potentially unwanted content. Specifically, the model recognizes the following concepts:
- Explicit: Mostly explicit nudity
- Suggestive: Partial nudity, but not 'safe'
- Gore: Disturbing images such as disfigurement, open wounds, burns, and crime scenes
- Drug: Pills, syringes, paraphernalia
- Safe: All images that do not trigger the above 4 concepts
Example result:
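As a rough illustration, here is a minimal sketch of what a moderation result might look like when consumed in code and how it could drive a filtering decision. The response shape, concept scores, and the `moderate` helper are illustrative assumptions, not the documented API format.

```python
# Minimal sketch: scoring an image against the Moderation Model's five
# concepts and picking the top one. The result shape and score values
# below are illustrative assumptions, not the documented API format.

# A hypothetical prediction result: one confidence score per concept.
example_result = {
    "explicit": 0.02,
    "suggestive": 0.11,
    "gore": 0.01,
    "drug": 0.03,
    "safe": 0.83,
}

def moderate(scores, threshold=0.5):
    """Return the highest-scoring concept, falling back to 'safe'
    when no unwanted concept clears the threshold."""
    top_concept = max(scores, key=scores.get)
    if top_concept != "safe" and scores[top_concept] < threshold:
        return "safe"
    return top_concept

print(moderate(example_result))  # -> "safe"
```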

Face Embeddings
Overview
The 'Face Embedding' model analyzes images and returns a numerical vector for each detected face, representing it in a 1024-dimensional space. Faces are located with our existing 'Face Detection' model, and the vectors of visually similar faces end up close to each other in that space (see the sketch below). The model can be used for organizing, filtering, and ranking images by visual similarity.
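To make "close to each other" concrete, here is a minimal sketch of ranking faces by cosine similarity between embeddings. The vectors here are random stand-ins; in practice each one would come from the Face Embedding model, one per detected face.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    1.0 means identical direction, ~0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Stand-ins for 1024-dimensional face embeddings. In practice these
# would be returned by the Face Embedding model.
query = rng.standard_normal(1024)
gallery = {f"face_{i}": rng.standard_normal(1024) for i in range(5)}
gallery["face_match"] = query + 0.05 * rng.standard_normal(1024)  # near-duplicate

# Rank gallery faces by similarity to the query face.
ranked = sorted(gallery,
                key=lambda name: cosine_similarity(query, gallery[name]),
                reverse=True)
print(ranked[0])  # -> "face_match"
```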
Some use cases:
- Visual search for faces (for customers who want to build their own search and don’t want us to index their images)
- Custom training: fitting a regression or linear classification model on top of the embeddings (see the sketch after this list)
- Research experiments using the embeddings
- Perceptual hashing
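As an illustration of the "linear classifier on top" use case, here is a minimal sketch that trains scikit-learn's LogisticRegression directly on embedding vectors. The synthetic embeddings and the two-class setup are assumptions for demonstration only; real embeddings would come from the Face Embedding model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic stand-ins for 1024-dimensional face embeddings, split into
# two classes made linearly separable by a shifted mean.
n_per_class, dim = 50, 1024
class_a = rng.standard_normal((n_per_class, dim))
class_b = rng.standard_normal((n_per_class, dim)) + 0.5

X = np.vstack([class_a, class_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

# A simple linear classifier trained directly on the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Because the embeddings already encode facial similarity, a lightweight linear model on top is often enough for custom labels, with no retraining of the underlying network.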
Give them a test run and let us know what you think!