Conceptually Linked Interpretability

The goal of “Conceptual Link” is to optimize workflows and improve human interpretability.

In essence, what I suggest is a conceptual link (sketched in code below) between:
• the model, or rather the suite of tools
• the training process: datasets, word embeddings & libraries
• the use-case potential/scenario.
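
To make that more concrete, here is a minimal sketch of a conceptual link as a plain data structure. Everything in it (the class name, the fields, the "Chef" persona) is my own illustration, not a fixed schema:

```python
from dataclasses import dataclass

@dataclass
class ConceptualLink:
    """Binds a human-interpretable persona to the model suite,
    training assets, and use case it stands in for."""
    persona: str                 # the interpretability anchor, e.g. a cartoon character
    model_suite: list[str]       # model, or rather suite of tools
    training_assets: list[str]   # datasets, word embeddings & libraries
    use_case: str                # the use-case potential/scenario

# A hypothetical "Chef" persona linked to a cooking-assistant VA stack:
chef = ConceptualLink(
    persona="Chef",
    model_suite=["intent-classifier", "recipe-retriever"],
    training_assets=["recipe corpus", "food-domain word embeddings"],
    use_case="cooking assistant",
)
print(f"{chef.persona} -> {chef.use_case}")
```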

A great way to explain the idea of "conceptual linking" for virtual assistants is through an image:

[Animated GIF: cartoon personifications of various career choices.]

Cartoon personifications of career choices are a good starter set for establishing conceptually linked interpretability across your models, training processes/datasets, and use-case scenarios.

It is but a drop in the ocean. The tip of the iceberg if you will.

Also keep in mind that this initial set is ideal for NLP – specifically, for a platform facilitating customizable iterations of virtual assistants (VAs).

The idea of a conceptual link is not limited to personification, nor to VAs. Conceptual links for other use-case scenarios (self-driving, analysis engines, etc.) can be represented as animals, machines, or inanimate objects.

This platform, and the idea of users customizing and creating VAs, should facilitate creative expression – say, a VA built from an entirely fictional character, or a VA created by a user on the front end as if it were a work of art.

Another benefit of a platform built around creating VAs is a potential solution for managing deepfakes. In theory, a user could record their likeness into a VA for many reasons; one would be to establish control and ownership of that likeness. After a double verification process confirms that the user is the real deal, run a Linear-Feedback Shift Register (LFSR) ID with nonlinear encryption tweaks to create a secure, fast, and efficient cyclic redundancy check for the entire platform.
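
I can only gesture at that last part, but here is a minimal sketch of the LFSR piece: a 16-bit Fibonacci LFSR (taps at bits 16, 14, 13, 11, a maximal-length polynomial) with a purely illustrative nonlinear output tweak. This is a toy, not a vetted cryptographic or CRC design:

```python
def lfsr16_ids(seed: int, steps: int):
    """16-bit Fibonacci LFSR with taps at bits 16, 14, 13, 11 (maximal length).

    The multiply/xor on the output stands in for the "nonlinear encryption
    tweaks" mentioned above; it is purely illustrative and has no claimed
    security properties.
    """
    state = seed & 0xFFFF
    if state == 0:
        raise ValueError("an all-zero seed locks the register")
    for _ in range(steps):
        # XOR the tapped bits to form the feedback bit
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = ((state >> 1) | (bit << 15)) & 0xFFFF
        # hypothetical nonlinear output filter over the raw state
        yield (state * 0x9E37 ^ (state >> 7)) & 0xFFFF

print([hex(x) for x in lfsr16_ids(seed=0xACE1, steps=4)])
```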

Before I reach too far ahead.

I share this idea with the hope that this approach will not only improve your workflows but also significantly improve the presentation of your projects.

Now for the moment of zen.

In the context of a mobile game (with AR features) built around raising, training, and battling virtual creatures, there are many processes you can leverage to get users to contribute tagged data for machine learning models.
What if you fed the virtual creatures by taking photos of a requested object? By producing sounds of a requested type or nature? By drawing requested shapes on the virtual screen?
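
As a sketch of how the feeding mechanic could double as a labeling pipeline (all names, paths, and fields here are hypothetical):

```python
import json
import time
import uuid
from pathlib import Path

DATASET_DIR = Path("creature_feedings")  # hypothetical storage location
DATASET_DIR.mkdir(exist_ok=True)

def record_feeding(creature_id: str, requested_label: str, photo_bytes: bytes) -> dict:
    """Store a user's photo under the label the game asked for.

    Because the game chose the prompt ("feed your creature an apple"),
    every accepted feeding doubles as one weakly labeled training example.
    """
    sample_id = uuid.uuid4().hex
    (DATASET_DIR / f"{sample_id}.jpg").write_bytes(photo_bytes)
    record = {
        "sample_id": sample_id,
        "creature_id": creature_id,
        "label": requested_label,  # the tag comes from the game's request
        "timestamp": time.time(),
        "verified": False,         # flipped later by a junk filter (sketched further below)
    }
    (DATASET_DIR / f"{sample_id}.json").write_text(json.dumps(record))
    return record
```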

Suffice it to say, these datasets could be partially guided via game design in an effort to produce potentially valuable data streams for machine learning.

How valuable?

I am uncertain. I get that quality over quantity is ideal for initial training runs, but for testing or overlapping analysis of an emergent user-generated dataset… I don’t understand enough about the way these systems work to determine a value for such a dataset at this time.

In essence, the easiest approach would be to use the game experience to generate a stream of unique iterations of well-classified objects – with random levels of junk data mixed in.
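
One way (among many) to manage that junk: score each submission with an off-the-shelf classifier and only mark it verified when the requested label is at least plausible. The model choice and threshold below are my assumptions, and a real filter would use a model trained on the game's own label set (this uses the torchvision ≥ 0.13 weights API):

```python
import torch
from PIL import Image
from torchvision import models

# Off-the-shelf ImageNet classifier as a cheap plausibility check.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

@torch.no_grad()
def looks_plausible(photo_path: str, requested_label: str, threshold: float = 0.1) -> bool:
    """True if the requested label shows up in the top-5 predictions with enough mass."""
    img = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    probs = model(img).softmax(dim=1)[0]
    top_p, top_i = probs.topk(5)
    return any(
        requested_label.lower() in categories[i].lower() and p.item() >= threshold
        for p, i in zip(top_p, top_i)
    )
```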

Then there is the training aspect of the game experience, which could be designed to provide valuable data for computer vision models while remaining fun for the user at the same time.
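
Since this forum is about PyTorch, here is what the bridge from game records to model training might look like: a minimal Dataset over the hypothetical feeding records sketched earlier (directory layout and field names are still my assumptions):

```python
import json
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset

class CreatureFeedingDataset(Dataset):
    """Wraps the hypothetical feeding records for a vision model."""

    def __init__(self, root: str, labels: list[str], transform=None):
        self.root = Path(root)
        self.transform = transform
        self.label_to_idx = {name: i for i, name in enumerate(labels)}
        self.records = []
        for meta in self.root.glob("*.json"):
            record = json.loads(meta.read_text())
            if record.get("verified"):  # keep only samples that passed the junk filter
                self.records.append(record)

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        record = self.records[idx]
        img = Image.open(self.root / f"{record['sample_id']}.jpg").convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        return img, self.label_to_idx[record["label"]]
```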

Next up is the evolution process, which could be designed to generate yet another machine learning dataset.

Then last but certainly not least is the battle mode. I will leave this one wide open for your imagination to run wild with.

I have several ideas regarding clever use-case scenarios for machine learning software, but I need to do more research to determine the viability of those ideas while also figuring out where to even post them here…

I look forward to learning more about PyTorch.

One more question: for computer vision without 3D spatial mapping, and changing nothing about a well-trained, highly accurate image-analysis algorithm, what would happen if you put it in front of a Mandelbrot fractal zoom?
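
That question is cheap to answer empirically. Here is a sketch that renders frames of a Mandelbrot zoom with NumPy and asks a stock ImageNet classifier what it “sees” (the model and zoom path are arbitrary choices on my part). My hedged guess: the softmax will still hand back confident-looking labels for these thoroughly out-of-distribution frames, which is the interesting part:

```python
import numpy as np
import torch
from PIL import Image
from torchvision import models

def mandelbrot_frame(width=224, height=224, zoom=1.0,
                     center=(-0.743643887, 0.131825904), max_iter=200):
    """Render one grayscale frame of a Mandelbrot zoom as an RGB PIL image."""
    scale = 3.0 / (width * zoom)
    xs = center[0] + (np.arange(width) - width / 2) * scale
    ys = center[1] + (np.arange(height) - height / 2) * scale
    c = xs[None, :] + 1j * ys[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int32)
    for _ in range(max_iter):
        mask = np.abs(z) <= 2.0   # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]
        counts += mask
    img = (255 * counts / max_iter).astype(np.uint8)
    return Image.fromarray(img).convert("RGB")

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

with torch.no_grad():
    for zoom in (1.0, 10.0, 100.0):
        x = weights.transforms()(mandelbrot_frame(zoom=zoom)).unsqueeze(0)
        probs = model(x).softmax(dim=1)[0]
        p, i = probs.max(dim=0)
        print(f"zoom {zoom:>6}: {weights.meta['categories'][i]} ({p.item():.2f})")
```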

Food for thought…