Doesn't remotely surprise me to see such technologies coming from Adobe. They are a software service company after all, and have always looked to expand their market and the technologies they can offer to clients.
Adobe also has extensive knowledge in the manipulation processes used by photo editing software because, surprise surprise, they've written and refined a large chunk of them in-house.
Quote
"AI could help fake photo detection be easier, faster, more reliable, and more informative."
End Quote
And of course, one can read that the other way: the same AI techniques can also be used to create fake photos that pass detection. All of our largest tech companies are spending lavishly on developing AI systems, and as they all hunt for massive new revenue streams to keep their stock prices growing, past experience suggests the result will be as much nefarious use as noble use.
That of course would depend on the nature of the AI used, and the system resources devoted to its development.
If it is a well-tuned, self-learning system given continuous uptime to refine itself and an honest stream of test data, then a copy of it is unlikely to be directly useful for creating fakes. Any attempt to use that copy to produce a passing fake would require several orders of magnitude more computing resources, just to train the forgery side fast enough to stay far enough ahead of the detection AI that the fake isn't spotted as such soon after production.
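The asymmetry above hinges on access: a forger with a full copy of the detector can optimize directly against it, while one without it faces a much harder search. As a toy sketch only (nothing here reflects Adobe's actual system; the linear "detector", its weights, and all names are invented for illustration), white-box evasion can look like this:

```python
# Toy illustration, NOT Adobe's system: a linear "fake detector" and a forger
# who holds a copy of it. With white-box access, a few gradient steps push a
# flagged image below the detection threshold. All parameters are invented.

def detect_score(weights, pixels):
    """Higher score = more 'fake-looking' (toy linear detector)."""
    return sum(w * p for w, p in zip(weights, pixels))

def evade(weights, pixels, threshold, step=0.05, max_iters=1000):
    """Nudge pixels against the detector's gradient until the image passes."""
    pixels = list(pixels)
    for _ in range(max_iters):
        if detect_score(weights, pixels) < threshold:
            break
        # the gradient of a linear score w.r.t. the pixels is just the weights
        pixels = [p - step * w for p, w in zip(pixels, weights)]
    return pixels

weights = [0.8, -0.3, 0.5, 0.2]   # toy detector parameters
fake = [1.0, 0.2, 0.9, 0.4]       # a "fake" image reduced to 4 pixels
threshold = 0.5

print(detect_score(weights, fake) > threshold)                   # True: flagged
print(detect_score(weights, evade(weights, fake, threshold)) < threshold)  # True: evaded
```

Without the copy (black-box), the forger can only probe the detector from outside, which is exactly where the "orders of magnitude more compute" argument bites.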
The more likely outcome is that it becomes a faster test for whether a fake or forgery developed through other research avenues has any identifiable flaws. A section of our reality and a sufficiently advanced model of it are, mathematically speaking, indistinguishable, so there is a ceiling on how reliably a computer can detect a fake: at some point there isn't enough 'fake-looking data' left in an image to say one way or the other. Short of finding the original image or other sources of information beyond the photo in question, the image itself may contain nothing to suggest it isn't real.
Humans have been faking photos from the get-go, and there will always be some who find a reason and the resources to fake photographic 'evidence' to the point that there is no way to identify it as fake from the photo itself. (How many people can tell staged Civil War era photos from unstaged ones without being told beforehand which had staged elements and which were thought to be real, unaltered scenes? A good reminder that a faked image isn't always one that has been edited in any way.)