So two problems I thought about:
- Say you have a neural net you want to send to a friend to use, but you don’t want that friend to steal your model, start improving it, and sell it to others. How can you make the model work for the purpose it was made, while being impossible to adapt?
- Alice and Bob meet online and text for a while. After some time, they grow very fond of each other and decide to take their relationship to the next level. Alas, they are both already married to others. But, as is sometimes the case, they just can’t help it and decide to cheat on their respective spouses (spices? spousi?) together.
But first, just to make sure it’s really worth risking a marriage, they want to confirm they’re attracted to each other by exchanging some pictures. But soon enough they realize there is some risk involved: one might not like the other physically, at which point the rejected party might turn vengeful, with enough information to expose the uninterested party’s identity using their image. So they think about a solution and naturally come up with this one: they each train a neural net to identify attractive pictures according to their respective tastes, and send each other the models. As luck has it, both models approve, so they let each other know. But how can Alice or Bob know the other is not just lying? After all, neither can see the other’s picture, so how can they know the other actually ran the test, instead of just saying they did?
The maybe-silly solution I came up with for the first problem is the following: one can add a lot of new architecture to the network that is indistinguishable from the original but sums to zero (though it would be intractable to identify that it does). Given this padded model, it would be very hard to learn anything useful from it, but on any given input it should yield the same results as the original model.
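To make the idea concrete, here is a minimal numpy sketch under toy assumptions: the “model” is a single linear layer `W` (hypothetical), and the added architecture is a pair of decoy branches built from a random matrix `A` that cancel exactly. In a real obfuscation the cancelling parts would be further re-parameterized so the cancellation is hard to spot; this sketch only shows that the outputs stay identical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original "model": a single linear layer (toy stand-in for a real net).
W = rng.normal(size=(4, 3))

def original(x):
    return x @ W

# Obfuscated model: two extra branches whose contributions sum to zero.
# A is random; here the cancellation is obvious, but after further random
# re-parameterization (not shown) it would be hard to identify.
A = rng.normal(size=(4, 3))

def obfuscated(x):
    return x @ W + x @ A + x @ (-A)

x = rng.normal(size=(5, 4))
assert np.allclose(original(x), obfuscated(x))  # same outputs on any input
```

The padded model has extra parameters that look like real structure, yet its input–output behavior is unchanged.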
As for the second problem… let’s see. Call Alice’s model M: given an image x, M returns true iff x is a picture of a handsome man. Now suppose Alice could somehow construct a hashing function f and a model M’ such that M’ accepts f(x) iff M accepts x. Then Bob could just send f(x) and be safe.
One way to make this work is to hash all numbers with a fully homomorphic encryption scheme. This is a function that preserves the structure of addition and multiplication, so it has the properties f(x ⋅ y) = f(x) ⋅ f(y) and f(x + y) = f(x) + f(y). Given such a magical function, we can perform a private computation, checking at the end whether the model accepts (for more on this, see https://en.wikipedia.org/wiki/Homomorphic_encryption).
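A fully homomorphic scheme supporting both operations at once is heavy machinery, but the multiplicative half of the property can be demonstrated with textbook (unpadded) RSA, which is multiplicatively homomorphic. This is purely an illustrative sketch with tiny primes, not a secure scheme:

```python
# Toy demo of f(x*y) == f(x)*f(y) using textbook (unpadded) RSA.
# NOT secure -- small primes, no padding -- just shows the structure.
p, q = 61, 53          # small demo primes
n = p * q              # public modulus, 3233
e = 17                 # public exponent

def f(x):
    return pow(x, e, n)   # "hash"/encrypt: x^e mod n

x, y = 7, 6
assert f(x) * f(y) % n == f(x * y)   # f(x·y) = f(x)·f(y)
```

Real FHE schemes (e.g. lattice-based ones) additionally support addition on ciphertexts, which is what lets you evaluate an entire neural net under encryption.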
So Alice or Bob can send their encrypted image, and the other side can run the transformed model (every number in it is processed under the same encryption) and test the result, and that’s it.
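As a hedged end-to-end sketch of the flow, here is the same toy RSA setup with a hypothetical one-weight “model”: Bob encrypts his image (reduced to a single number for the demo), Alice multiplies it by her secret weight entirely on the ciphertext, and Bob decrypts the score. All the names and numbers are illustrative assumptions, not a real protocol:

```python
# Toy protocol flow with textbook RSA (NOT secure, demo only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # Bob's private key (Python 3.8+)

x = 42                    # Bob's "image", as a single number
c = pow(x, e, n)          # Bob -> Alice: encrypted image

w = 3                     # Alice's secret one-weight "model"
c_score = c * pow(w, e, n) % n      # Alice multiplies under encryption

score = pow(c_score, d, n)          # Bob decrypts the resulting score
assert score == x * w     # same result as running the model in the clear
```

Note the sketch sidesteps the hard parts: a real model also needs additions and a threshold comparison under encryption, and deciding who holds the decryption key (and who gets to see the verdict) is exactly where the “is the other side lying?” question resurfaces.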