The general network (one of the five on Artbreeder) has a tremendously large number of parameters (thousands of them, including, for example, around 120 dog breeds) and doesn’t take input images: the system is orders of magnitude more capable than Deep Dream (which is interesting in itself).
Uploading images to Deep Dream simply reapplies the network to an input iteratively, feeding back an internal layer to emphasise whatever features it detects. This may feel like it gives you more power (read: artistic control), but the output is far more limited and harder to steer than a generator's. Since Deep Dream is not a true generator network (just inference feedback), providing an input image makes sense but affords little control over the output's form: after a few iterations you won't see the relation to the original.
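To make the "iterative feedback" point concrete, here is a toy numpy sketch of the mechanism (my assumption of the mechanics, not Google's or Artbreeder's actual code): the input is repeatedly nudged in whatever direction amplifies an internal layer's activations, which is exactly why the result drifts away from the original picture.

```python
import numpy as np

# Toy Deep-Dream-style feedback loop: repeatedly push an "image" through
# a fixed layer and nudge the input toward whatever that layer already
# responds to, so its features get amplified on every iteration.

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) * 0.1  # stand-in for an internal layer

def dream_step(img, step=0.5):
    acts = np.tanh(W @ img)                  # internal-layer activations
    # gradient of 0.5 * sum(acts**2) with respect to img
    grad = W.T @ (acts * (1 - acts**2))
    return img + step * grad / (np.linalg.norm(grad) + 1e-8)

img = rng.standard_normal(16)
original = img.copy()
for _ in range(50):
    img = dream_step(img)

# After many iterations the result has drifted far from the input,
# which is why the relation to the uploaded picture disappears.
drift = np.linalg.norm(img - original)
print(round(drift, 2))
```

The key design point: there is no generator being sampled here, only inference being fed back on itself, so the uploaded image is the *starting point*, not a controllable parameter.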
The generator networks are created by adversarial “evolution” of a detector and a generator over a huge, annotated corpus of images. For 4 of the 5 networks (general, album art, landscape and anime portrait) you have no way to use the detector on the website (even though the underlying GANs are free/open source), but you are free to set their (human-understandable) parameters. This is likely down to a combination of copyright concerns (uploading proprietary imagery from anime, album covers etc.) and the risk of confusing (or even offensive) results (e.g. the detector deciding an image of you is best constructed from a rat, a garbage truck and a lobster).
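The adversarial "evolution" can be sketched in miniature. The following is a deliberately tiny 1-D illustration of the training dynamic (an assumption-laden toy, not Artbreeder's pipeline): a one-line "generator" learns to mimic a target distribution by fooling a logistic "detector" (discriminator), which is simultaneously trained to tell real samples from fake ones.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b tries to mimic N(3, 1);
# discriminator D(x) = sigmoid(w*x + c) tries to separate real from fake.

rng = np.random.default_rng(42)
sigmoid = lambda x: 1 / (1 + np.exp(-x))

a, b = 1.0, 0.0    # generator starts out producing N(0, 1)
w, c = 0.1, 0.0    # discriminator parameters
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(3.0, 1.0, batch)       # "annotated corpus" stand-in
    z = rng.standard_normal(batch)
    fake = a * z + b

    # Detector step: ascend log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.standard_normal(1000) + b
print(round(samples.mean(), 1))  # should have drifted toward the real mean
```

The same opposition, scaled up to convolutional networks and millions of images, is what produces both the generator you play with on the site and the detector that is mostly kept hidden.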
The portrait GAN does expose the detector, meaning you can upload a (portrait-format) image of a person (or anything else, with novel but hard-to-predict results). If the picture is “normal” you will see the detected parameters applied to the network, i.e. what the network thinks you look like, not the image itself. You can then alter the inputs, cross breed, randomly mutate and so on.
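A minimal sketch of what "cross breed" and "randomly alter" plausibly mean under the hood (my assumption; the latent size and the operations are illustrative, not Artbreeder's published code): the detector maps your photo to a parameter vector, and all editing then happens on that vector rather than on the pixels.

```python
import numpy as np

rng = np.random.default_rng(7)
DIM = 512  # assumed latent size

you = rng.standard_normal(DIM)    # stand-in for detector(your_photo)
other = rng.standard_normal(DIM)  # parameters of another portrait

def crossbreed(p, q, t=0.5):
    """Blend two parameter vectors (linear interpolation)."""
    return (1 - t) * p + t * q

def mutate(p, strength=0.1):
    """'Randomly alter': jitter the parameters slightly."""
    return p + strength * rng.standard_normal(p.shape)

child = crossbreed(you, other, t=0.3)   # mostly you, a little of the other
variant = mutate(child)                  # a nearby random variation
print(child.shape)
```

This is why the site shows you "what the network thinks you look like": the generator is always rendering a parameter vector, never your original pixels.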
Overall, if you like Deep Dream, this is an entirely different level of neural-net image generation technology. It’s worth registering and trying it out; I think you will be amazed at what it can do.