Getting the most out of PixelPet

Generating images is a lot of fun! But if you’re new to this, it may be helpful to have some guidance. Let’s take a look at the PixelPet plugin and some of its options and features!

The PixelPet main panel

This is where you give PixelPet instructions on what kind of image to generate and how to do it.

We’ll cover the basics here; don’t worry if you don’t understand everything yet. We elaborate on everything below, with examples.

The main PixelPet panel, in dark theme

Prompt: The prompt field is where you specify what you would like to see.
Negative Prompt: This is what you don’t want to see.
Image Strength: How much the input image should weigh in.
Images: How many images to make.
Generate Images: This will start the image generation process.

Some more advanced options:
Settings: To get more control over your generated images.
Model: Determines the style of your images.
ControlNet: Allows you to guide the model.

Bottom bar: This displays the account you’re signed in with and your credits.

Let’s specify what we want to see

Writing a good prompt

It’s good to be specific about the subject. For example: just entering the word cat would leave a lot of room for interpretation, and the result will be quite random.

Being more specific about what you would like to see will give you results that resemble your ideas more closely. Consider using a format like this: [subject], [location], [details], [time of day], [type of image], and being specific with each of them. Check out these images generated with the prompt ‘cat’ versus images generated with a prompt based on the above format: ‘a long haired fluffy cute cat, at home, sitting in the windowsill, next to a plant, garden in the background, golden hour, realistic photography’.
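If you build prompts like this often, it can help to treat the format as a simple template. Here is a minimal sketch in Python (the helper below is purely illustrative and not part of PixelPet):

```python
def build_prompt(subject, location, details, time_of_day, image_type):
    """Join the prompt components from the format above into one
    comma-separated prompt string."""
    return ", ".join([subject, location, details, time_of_day, image_type])

# The detailed cat prompt from the example above:
prompt = build_prompt(
    "a long haired fluffy cute cat",
    "at home",
    "sitting in the windowsill, next to a plant, garden in the background",
    "golden hour",
    "realistic photography",
)
print(prompt)
```

Filling each slot deliberately makes it easy to spot which part of the prompt is still too vague.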

Examples of a short prompt

Examples of a detailed prompt

Generally speaking, the more detailed prompt gives more desirable results. There may still be results that you are unhappy with, though. Consider adding more info to the prompt; in this case, cat breed, fur and eye color could be specified, but also pose, lighting details or type of camera. Another way to narrow down results is by adding a negative prompt. This allows you to exclude certain things from your image. Once your prompts get more elaborate, take a close look at both your prompt and negative prompt: is there anything missing, or are there any conflicting instructions?

Another (less efficient) option to get closer to your desired result is to increase the image quantity. This way, you can take advantage of the large random factor that is used to generate images. Making more images increases the likelihood of getting the perfect shot.

A large canvas or a disproportionate base size can lead to issues; try starting from a 2048x2048 canvas with a 512 base size and work your way up from there. Increasing the image quantity can also help here, for the same reason as above: more images mean a higher chance of the perfect shot.

Common prompt additions:
hdr, intricate details, hyper detailed, cinematic shot, vignette, centered, 8k uhd, dslr, symmetric face, soft lighting, high quality, film grain, Fujifilm XT3, dramatic, complex background, cinematic, filmic.

These can be applied depending on the subject and look that you want.

Getting your prompt right is one of the most important factors in getting good results. This requires a bit of practice and playing around. If you don’t know where to start, or if you are not happy with your results, you could browse through a guide like the one Anashel wrote for RPG v4 (the ‘Fantasy’ model in PixelPet), or take a look at Civitai.com or Lexica.art for inspiration. Creators often list the prompts and the model they used alongside their images. Read more about models below.

If you’re unhappy due to quality issues, check out the settings section below.

And now, what we don’t want to see.

How to use a negative prompt

The negative prompt tells the image generation model what you want to exclude from your renders. To stick with our prior example, you could exclude a specific cat breed or color by specifying it here. Be specific: don’t write ‘white’ but ‘white cat’, otherwise you’re telling the model not to use that color at all, which may lead to unwanted results.

Extra limbs
People often write ‘extra limbs’ here. While doing this won’t harm you, making sure your settings are in order is usually a more effective way to avoid limb problems.

Just like the normal prompt, you can find inspiration for negative prompts at Civitai.com or Lexica.art.

In our experience, a negative prompt is not nearly as important as writing a good prompt. What are your experiences? Let us know on Discord!

Common negative prompt additions:
3D render, sketch, drawing, low quality, deformed, watermark, blurry.

These can be applied depending on the subject and look that you want.

Add strength 💪🏻

Image Strength allows you to generate images based on another image or part of it.

Optionally, you can give PixelPet an image, or adjust part of it with a selection. PixelPet will then use this as the basis for the new images. This process is also known as ‘image to image’, or ‘img2img’ for short. Read more about adjusting and fixing images below.

The input image can play an abstract role, for instance as a color palette or composition reference, but you can also use it as a solid base for your final image and just add some extra details. Let’s look at some examples of image strength settings and what effect they could have.

Image Strength: 0
This will tell the model to make a new image based only on the prompt. This practice is called ‘text to image‘ or ‘txt2img‘ for short.

Image Strength: 1 - 10
A low setting like this will still tell the model to make a new image, but to take a little inspiration from the input image.

Image Strength: 10 - 50
This is useful for making significant adjustments, for example completely changing a face or hairstyle. Check out ‘What if we want to fix a part of an image?’ below.

Image Strength: 50 - 90
At these numbers, we’re adjusting smaller details. The composition and subject will be very similar to the input image but small details such as jewelry, expression or details in the background may change.

Image Strength: 90 - 100
With a setting this high, few to no changes will be visible compared to the input image.
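For reference, the img2img samplers that tools like this typically build on expose a denoising strength between 0 and 1, where higher means more change. PixelPet’s Image Strength runs the other way: higher means the input image is preserved more. A simple linear conversion could look like the sketch below (the mapping PixelPet actually uses internally is an assumption here):

```python
def denoising_strength(image_strength):
    """Convert a 0-100 Image Strength (how strongly the input image is
    preserved) to a 0-1 denoising strength (how much may change).
    The linear mapping is an assumption for illustration only."""
    if not 0 <= image_strength <= 100:
        raise ValueError("Image Strength must be between 0 and 100")
    return 1.0 - image_strength / 100.0

# Image Strength 0 behaves like txt2img (full creative freedom),
# while a high setting leaves the input image nearly untouched.
```

Either way, the takeaway is the same: the two scales are inverses of each other, which is why a high Image Strength changes so little.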

Fine-tuning in the settings

You can leave these settings at their default state if you’re not sure how to set them. It will start to make more sense the more you experiment with PixelPet.

Base Size:
This will be the size of the shortest side of the rendered image. The other side will be scaled proportionally. It is tempting to increase this number to get more detail, but extra detail may come in forms that you don’t want, such as extra limbs or entire copies of your subject. It’s generally safe to use a higher Base Size when your Image Strength is high too; this will result in higher-resolution renders.
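To get a feel for how Base Size interacts with your canvas, here is a sketch of the proportional scaling described above (the exact rounding PixelPet applies is an assumption):

```python
def render_size(canvas_w, canvas_h, base_size):
    """Scale the canvas so its shortest side equals base_size;
    the other side is scaled proportionally (rounded to whole pixels)."""
    short, long = min(canvas_w, canvas_h), max(canvas_w, canvas_h)
    scaled_long = round(long * base_size / short)
    if canvas_w <= canvas_h:
        return base_size, scaled_long
    return scaled_long, base_size

print(render_size(2048, 2048, 512))  # a square canvas renders at 512x512
print(render_size(2048, 1536, 512))  # a 4:3 canvas keeps its aspect ratio
```

This is also why a very elongated canvas can cause trouble: the long side grows well past the resolutions the model was trained on.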

Prompt Strength:
This determines the weight of the prompt (also known as the CFG scale). Generally 7 is good, but if you feel your images deviate too far from your prompt, you can increase it. If you set it too high, you risk getting very sharp lines, over-pronounced details and highly saturated colors. Lower values give the model more creative freedom to deviate from your request.

Steps:
This is the number of denoising steps the model takes during the generation process, and it affects the sharpness of details in your image. We recommend starting a little low and moving up from there, because more steps increase both generation time and credit cost.

Tiling:
Switching this on will make the generated images tile seamlessly.

Most of these settings depend on the style and purpose you want, as well as the size of the canvas you are working on. The examples on the left are rendered at 512x512px.

A comparison between high and low values for Base size, Prompt strength and Steps

Defining your style with Models

Some results of different diffusion models

Generative models are trained on large data sets to learn how to create new images with the same style or content. Each model is trained differently: some broadly, others specifically for a certain style.

Just click the drop-down menu in the ‘Models’ tab of the main PixelPet window to explore the different options. Most models have a ‘More..’ link that will take you to a webpage with more info and examples of that specific model. This is usually also a pretty good place to find inspiration for prompts.

What if we want to fix a part of an image?

A technique called ’img2img’ may come in handy.

The image you generated can be close to what you want, but there may be some issues. For instance, faces can be slightly (or very) weird, or there could be extra limbs or fingers. These issues may be due to a Base Size value that doesn’t fit your canvas or selection very well. Other issues you may run into could be details that you would like to adjust, add or remove, such as age, eye color, jewelry, clothing, a tattoo, etc. We can usually fix all of these with inpainting and the tools Photoshop has to offer.

With PixelPet, img2img is as easy as creating a marquee selection, setting an image strength, and pressing ‘Generate Images’. The new image will be generated within the given bounds. You can work with selections of any shape, and it is good practice to apply a feather to them. This will blend the generated result in a lot more smoothly. You can also increase the selection padding, which will send a larger part of the image around the selection bounds to the server. This gives the model more context, so the generated area fits its surroundings better.
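To see why a feather helps, consider the opacity ramp it creates at the selection edge: generated pixels fade gradually into the original instead of cutting off hard. A simplified linear version (Photoshop’s actual feather uses a Gaussian-style falloff) could look like this:

```python
def feather_alpha(distance_into_selection, feather_px):
    """Blend opacity of a generated pixel, given its distance (in px)
    inward from the selection edge. 0.0 keeps the original image,
    1.0 is fully generated. A linear ramp is a simplification."""
    if feather_px <= 0:
        return 1.0  # hard-edged selection: no blending at all
    return max(0.0, min(1.0, distance_into_selection / feather_px))

# With a 20 px feather, a pixel on the edge keeps the original,
# while a pixel 20 px or more inside is fully replaced.
```

This is the same reason layer masks work well for combining generated results: they give you manual control over that same blend.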

For good img2img, you need to understand how image strength works and experiment with different values. You will start to get a feel for where the sweet spot is for each purpose.

Here is an example of a space marine, which we adjusted with three separate instances of img2img. For each, a selection with a 20 px feather was made on the area we wanted to adjust. The prompt of every adjustment specifies the changes to the area, but also repeats the original prompt. Things like “make him younger” won’t work …yet. We’re working on that.

Sometimes additional details appear that we may or may not like. Use layer masks to combine generated images to get the best outcome.

Play around!

The best way to get better at this is to just play around with it.

We hope this article helped you get on your way. It’s still subject to improvement, and additions will be made. If you have any questions or suggestions, or run into any trouble, please reach out to us via mail or on our Discord. We’re happy to hear from you.