Usage Guide
 
What is LoRA?
Why did the image turn out blurry and low-quality?
Horizontal or vertical stripes appear in the images
What should you do if you like the image but it has stripes?
How do I emphasize the most relevant parts of the prompt?
Upscaling and its main function
Character reference
Why can't I create a full-body character?
Pose / Structure reference
Why couldn't my images be generated?

 

What is LoRA?

LoRA (Low-Rank Adaptation) is a way to quickly fine-tune a large model for specific tasks without completely retraining it. Instead of modifying the entire model, small adjustments are added on top of it to refine its behavior. This saves time and resources. With LoRA, you can train a model such as Flux to create images in a specific style or to solve narrow tasks. For example, we have LoRAs trained to create images in a glamorous style (Glamorous Style v2 or Pink Barbie, a premium LoRA) or in a fantasy style (Fantasy Wizard & Witches, Mythic Fantasy Styles, and others).

LoRAs can be combined with each other, and by choosing the best combination you can achieve excellent results. On our website, you can set the strength (intensity) of each LoRA as a percentage, determining how strongly it will be applied during image generation.

However, you need to be cautious with LoRAs! One LoRA can override other LoRAs. Using too many LoRAs can distort the prompt and fail to improve the result; it might even make it worse. Certain LoRAs should not be set above 50–70%; at 100% or more, the image loses its essence and looks poor and unnatural.

Examples of such LoRAs include:
- Midjourney Dreamlike Fantasy
- Radiant Glow · Dreamlike Portrait · Visual Art
- Opalescent Effects

It is advisable to use between 1 and 5 LoRAs. More experienced creators can achieve excellent results even with 10 or more LoRAs, but it is essential to understand which LoRAs can be combined so that they enhance the image instead of spoiling it.
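To make the idea of "strength" more concrete, here is a minimal, self-contained sketch of how a LoRA works mathematically: each LoRA contributes a small low-rank update to the frozen base weights, scaled by the strength you choose. The matrix sizes, LoRA names, and strength values below are made up for illustration; this is not our actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
W_base = rng.normal(size=(512, 512))  # frozen base-model weight matrix (toy size)

def lora_delta(rank: int, size: int) -> np.ndarray:
    """A low-rank update B @ A: full-sized, but with only `rank` degrees of freedom."""
    A = rng.normal(size=(rank, size))
    B = rng.normal(size=(size, rank))
    return B @ A

# Two hypothetical LoRAs with the strengths you would set on the site (as fractions).
loras = [
    {"name": "fantasy-style", "strength": 0.7, "delta": lora_delta(8, 512)},
    {"name": "glow-effect",   "strength": 0.4, "delta": lora_delta(8, 512)},
]

# Effective weights: the base plus each LoRA's update scaled by its strength.
W_eff = W_base + sum(l["strength"] * l["delta"] for l in loras)

# The higher the strengths (or the more LoRAs), the further the result drifts
# from the base model, which is why overdoing it makes images look unnatural.
print(np.linalg.norm(W_eff - W_base) / np.linalg.norm(W_base))
```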

 

Why did the image turn out blurry and low-quality?

A blurry, low-quality image can result from incorrect LoRA usage: using too many LoRAs or setting the effect strength too high (over 100%). With some LoRAs, image quality can deteriorate significantly even at 70–80% intensity.

 

Horizontal or vertical stripes appear in the images

Stripes can appear due to incorrect use of LoRAs, using too many of them, or setting the effect strength too high (over 100%).  
In some cases, certain LoRAs from the "Details and Midjourney" series can also cause stripes to appear.

 

What should you do if you like the image but it has stripes?

If you like the image but it has stripes, the fastest and most effective way to remove them is to process it through our upscaling feature (1.2x or 2x). To do this, go to the "My Creations" section ➡️ "Images," select the desired image, click the "Upscale" button, and choose the desired output size: 1.2x or 2x (2x requires more credits). Then select the upscaling model and click the "Upscale Image" button. If stripes remain after enhancing at 1.2x, try 2x or select another upscaling model.

 

How do I emphasize the most relevant parts of the prompt?

In all SDXL and Flux models (Flux Schnell, Shuttle 3.1 Aesthetic, Flux 1.1 Pro Ultra, Flux 1.1 Pro), the more relevant parts of the text can be highlighted using round brackets: (.....), ((.....)), (((.....))). The more brackets you use, the higher the priority that part receives. For example: a young woman with (blond hair) holding a ((black fuzzy cat)).
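As a rough illustration of how nesting typically translates into emphasis: in many SDXL-style tools each extra pair of brackets multiplies a part's weight by a fixed factor (often around 1.1). The exact factor used by our models may differ, so treat the snippet below purely as a sketch of the idea.

```python
import re

EMPHASIS_PER_BRACKET = 1.1  # common convention in SDXL tooling; the exact factor may differ

def bracket_weights(prompt: str) -> list[tuple[str, float]]:
    """Split a prompt into (text, weight) pairs, where weight grows with bracket depth."""
    depth = 0
    parts = []
    for token in re.split(r"([()])", prompt):
        if token == "(":
            depth += 1
        elif token == ")":
            depth -= 1
        elif token.strip():
            parts.append((token.strip(), round(EMPHASIS_PER_BRACKET ** depth, 3)))
    return parts

print(bracket_weights("a young woman with (blond hair) holding a ((black fuzzy cat))"))
# [('a young woman with', 1.0), ('blond hair', 1.1), ('holding a', 1.0), ('black fuzzy cat', 1.21)]
```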

 

Upscaling and its main function

Upscaling is the process of increasing the resolution or scale of an image, video, or other graphical content. During this process, objects become sharper and more detailed, improving the overall quality of the image.

You can upscale almost any image on proxima. If you want to upscale an image previously generated on our website, just open that image in the gallery and press the Upscale button. If you want to upscale other "external" images, go to the "Magic" tab and select the "Upscaling" tool. In both cases you will be redirected to our magic upscaler, where you will see the selected image or an empty space where you can upload your own image.

There are currently 5 AI models used for upscaling; they all have different effects and purposes. You can control the resulting resolution by selecting the desired size between 1.2x and 8x. 1.2x is good for image refinement and artefact removal; the other sizes require more credits and take longer to produce the resulting image. The last option gives you control over upscaling steps, i.e. how many upscaling iterations are performed during the process. More steps generally give better results, but they cost more credits and take much more time. A good choice is normally between 2 and 10 steps.
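To make the size factor concrete, here is a small back-of-the-envelope sketch of what each factor means for the output resolution (the starting size of 1024 x 1024 is just an example):

```python
def upscaled_size(width: int, height: int, factor: float) -> tuple[int, int]:
    """Output resolution for a given upscaling factor."""
    return round(width * factor), round(height * factor)

for factor in (1.2, 2, 4, 8):
    print(factor, upscaled_size(1024, 1024, factor))
# 1.2 -> (1229, 1229): good for refinement and artefact removal
# 2   -> (2048, 2048)
# 4   -> (4096, 4096)
# 8   -> (8192, 8192): largest output, costs the most credits and takes the longest
```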

 

Character reference

You can use an existing image with any character as a reference to generate a new image with the same character's face. The models used for character and style transfer are still under development and may not be 100% accurate, but they generally produce very good results with the proper settings in our Character Reference magic tool.

To create an image based on your character, select "Character Reference" in our magic tools and upload an image with the character. Please note that the face should be clear and the image should have a high resolution (at least 1024 x 1024). As in the normal image generator, you can write your prompt and apply LoRAs, but do not apply too many of them, because they can distort the character and you will not get the results you want.

There are also two additional settings: "reference image strength" and "reference image start". The first lets you specify how strongly the character image is applied; normally keep it at 100%. The second lets you specify when the character reference should kick in, e.g. 20% means it is applied once 20% of the image generation is finished. This can help to modify properties such as hair or eye color that are not inherent to the character. You can also generate more than one image; it is useful to see different variations and select the best one.
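One way to picture the "reference image start" setting: the reference is ignored for the first part of the generation and then applied for the rest, which is why the early steps are free to follow the prompt (for example, a different hair color). The step count and the exact way the two settings interact below are assumptions for illustration only, not our actual pipeline.

```python
TOTAL_STEPS = 30          # assumed number of generation steps
reference_strength = 1.0  # "reference image strength" = 100%
reference_start = 0.2     # "reference image start" = 20%

for step in range(TOTAL_STEPS):
    progress = step / TOTAL_STEPS
    # Before the start threshold the character reference is ignored, so the prompt
    # alone shapes the image; after it, the reference is applied at full strength.
    ref_weight = reference_strength if progress >= reference_start else 0.0
    if step in (0, 5, 6, 29):
        print(f"step {step:2d}: progress {progress:.2f}, reference weight {ref_weight}")
```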

 

Why can't I create a full-body character?

Some LoRAs were trained specifically on faces. Therefore, using these LoRAs makes it extremely difficult to generate a full-body character.

The following LoRAs are among them:
- Santa Muerte Midjourney
- Santa Muerte Midjourney v2
- Glamorous Style
- Glamorous Style v2
- Radiant Glow. Dreamlike Portrait • Visual Art
- Pink Barbie
- Dark Fantasy 2
 

If you still want to use one of these LoRAs and generate a full-body character, try reducing the effect strength of the LoRA to 30% or less. Using more than one of these LoRAs will make it even harder to achieve the desired result.  

Additionally, if your prompt specifies "full-body" but describes the face and makeup in detail, the model will most likely generate only the character's face.

 

Pose / Structure reference

This tool allows you to replicate a human pose or even clone the entire structure of a reference image. We have developed a special workflow on our servers that is optimized for the Flux model and is compatible with most LoRAs available on our website.

To clone a pose, select "Pose / Structure Reference" from our magic tools. You can choose a predefined pose from our library by clicking the Pose Library button and selecting either a female or male pose. If you click the "Use Hint" button, the selected pose, along with a sample prompt, will be automatically inserted into the form. If you want to use your own custom image, click the Custom Image button. Please avoid using large images, as they may increase generation time or even cause the process to fail.

We offer three processors — specialized tools that analyze the pose or structure of a reference image and generate a map for further image creation:

- Depth estimates a basic depth map from the reference image. This processor is ideal for recreating compositions, poses, and transforming consistent shapes into new ideas. In most cases, it’s the best choice if you want to preserve the pose or structure while maximizing creative potential.

- Canny detects edges and extracts outlines from your reference image. It generates outlines for characters, backgrounds, and foregrounds, making it especially useful for cloning poses or replicating the structure of objects.

- OpenPose detects human body, hand, facial, and foot keypoints. This processor provides greater flexibility for refining human attributes. It works well with humans and animals, but not all poses are compatible with it!
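To give a feel for what an edge-based processor like Canny produces, here is a standalone sketch using OpenCV's classic Canny edge detector. Our server-side processor may use different thresholds and pre-processing, and the file names are hypothetical; this is only an approximation of the structure map it creates.

```python
import cv2

# Load a reference image (hypothetical file name) and extract an edge map,
# roughly the kind of outline map a Canny-style processor hands to the model.
image = cv2.imread("reference_pose.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(image, 100, 200)  # typical lower/upper hysteresis thresholds
cv2.imwrite("structure_map.png", edges)
```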
 

There are two key parameters that control how the referenced pose or structure is applied:

- Reference Image Strength determines how strongly the pose or structure is applied. For poses, the optimal range is between 50% and 70%. To achieve better effects with LoRAs, it’s recommended to keep this value below 70%.

- Reference Image End specifies when the reference application should stop. This means the cloning of the selected pose or structure ceases before the model completes the normal image generation process. This parameter is crucial as it allows you to enhance LoRAs' effects and add extra details. Higher values reduce the impact of LoRAs!
 

The lower the Reference Image End value, the more freedom you give to the model's imagination and the more the LoRAs' effects will be applied. The optimal range is between 10% and 60%! The standard recommended values are 60% for reference strength and 40% for reference end; this should deliver reliable results in most cases.
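A compact sketch of how the recommended values can be read: the pose or structure map guides the generation up to the Reference Image End point, after which the model and any LoRAs take over. The mapping of percentages to a guidance schedule below is an assumption for illustration, not our exact workflow.

```python
def pose_guidance(progress: float, strength: float = 0.6, end: float = 0.4) -> float:
    """Pose/structure guidance at a given point of the generation (progress in 0.0-1.0).

    Defaults match the recommended 60% strength and 40% end. A higher `end` keeps
    the structure locked for longer, which also reduces the impact of LoRAs.
    """
    return strength if progress < end else 0.0

print([pose_guidance(p / 10) for p in range(10)])
# [0.6, 0.6, 0.6, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```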

 

Why couldn't my images be generated?

Sometimes image generation may fail. There can be several reasons for that:

- Check your prompt. Do not include the word "prompt"! Also avoid special characters such as smileys, icons, or emojis (see the sketch after this list).

- Generally, there is currently no censorship when using our models and LoRAs, but some models, such as Flux Pro and Ultra, may reject your input and cancel the generation. Try changing your prompt and removing problematic words.

- Free accounts may wait longer in the queue. If our servers are busy and there are many requests coming from other people, it may take some time to process yours. If a generation takes more than 10 minutes, it will be cancelled. To avoid this, you can buy a subscription; all paid plans have higher priority.

- During maintenance or server migration there could be some delays and image generation may fail. But we will inform you when it happens!
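Regarding the first point, here is a minimal sketch of how you could strip emojis and other special symbols from a prompt before submitting it. It keeps only letters, digits, whitespace, and common punctuation; it is a conservative illustration, not the exact filter used on our servers.

```python
import re

def clean_prompt(prompt: str) -> str:
    """Remove emojis and other special symbols that can cause a generation to fail."""
    # Keep letters, digits, whitespace and common punctuation; drop everything else.
    cleaned = re.sub(r"[^\w\s.,:;!?()'\"-]", "", prompt)
    return re.sub(r"\s+", " ", cleaned).strip()

print(clean_prompt("a cozy cabin in the woods 🌲🔥 (warm light)"))
# a cozy cabin in the woods (warm light)
```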