I only have a year or so of experience doing ML stuff, but it has all been in TensorFlow. I’d like some help expanding my vision to the possibilities allowed (or facilitated) by dynamic computation graphs. I would imagine that you could come up with some crazy network structures that weren’t possible/easy before. Some ideas that I’ve had - these aren’t meant to be good ideas, just different:
- graph-structured neural networks - each node is a few layers, and nodes can conditionally pass messages to each other depending on their outputs
- Similarly, state machines of some kind
- networks that change structure partway through - after you get a [batch size, 512, 13, 13] tensor out of a CNN, maybe you could cluster those vectors and choose a different fully connected layer for each cluster
- pick a random activation function every iteration
- layers that adjust size
- skipping layers
- re-running inputs through portions of the network multiple times
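For what it’s worth, several of the ideas above (random activations, skipping layers, re-running a block) can be combined in a few lines once the graph is built per call. Here’s a toy PyTorch sketch (the class name and sizes are made up for illustration) where the forward pass uses ordinary Python control flow, and autograd still works because the graph is rebuilt on every call:

```python
import random
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Toy network whose structure changes on every forward pass --
    the kind of thing that is awkward to express in a static graph."""

    def __init__(self, dim=16):
        super().__init__()
        self.inp = nn.Linear(dim, dim)
        self.block = nn.Linear(dim, dim)   # applied a random number of times
        self.skip = nn.Linear(dim, dim)    # sometimes skipped entirely
        self.out = nn.Linear(dim, 2)

    def forward(self, x):
        # pick a random activation function every iteration
        act = random.choice([torch.relu, torch.tanh, torch.sigmoid])
        h = act(self.inp(x))
        # re-run the same block a variable number of times
        for _ in range(random.randint(1, 3)):
            h = act(self.block(h))
        # conditionally apply a layer based on the activations themselves
        if h.mean() > 0:
            h = act(self.skip(h))
        return self.out(h)

net = DynamicNet()
x = torch.randn(4, 16)
y = net(x)                # y has shape [4, 2]
y.sum().backward()        # gradients flow through whatever path was taken
```

The point is just that branching, looping, and layer selection are plain Python here, so data-dependent structure (like your clustering idea) is a matter of writing an `if`/`for` over tensor values rather than wiring up conditional ops ahead of time.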
Am I on the right track? What other (potentially wacky) things could I do with dynamic graphs?