
Releases: invoke-ai/InvokeAI

v6.10.0rc2

04 Jan 15:50
56fd7bc

Pre-release

InvokeAI v6.10.0rc2

This is the first InvokeAI Community Edition release since the closure of the commercial venture, and we think you will be happy with the progress. This release introduces backend support for the state-of-the-art Z-Image-Turbo image generation models, and multiple frontend improvements that make working with InvokeAI an even smoother and more pleasurable experience.

The Z-Image Turbo Model Family

Z-Image-Turbo (ZIT) is a bilingual image generation model that combines high performance with a small footprint and excellent image quality. It excels at photorealistic image generation, renders both English and Chinese text accurately, and is easy to steer. The full model runs easily on consumer hardware with 16 GB of VRAM, while quantized versions run on significantly smaller cards with some loss of precision.

With this release, InvokeAI runs almost all released versions of ZIT, including diffusers, safetensors, GGUF, FP8, and other quantized versions. However, be aware that the FP8 scaled-weights models are not yet fully supported and will produce image artifacts. In addition, InvokeAI supports text2image, image2image, ZIT LoRA models, ControlNet models, canvas functions, and regional guidance. Image Prompts (IP) are not supported by ZIT, but similar functionality is expected when Z-Image Edit is publicly released.

To get started using ZIT, go to the Models tab and, from the Launchpad, select the Z-Image Turbo bundle to install all the available ZIT-related models and dependencies (roughly 35 GB in total). Alternatively, you can select individual models from the Starter Models tab by searching for "Z-Image." The full and Q8 models will run on a 16 GB card. For cards with 6-8 GB of VRAM, choose the smaller quantized model, Z-Image Turbo GGUF Q4_K. Note that when using one of the quantized models, you will also need the standalone Qwen3 encoder and one of the Flux VAE models; these are installed for you automatically when you install a ZIT starter model.

When generating with these models it is recommended to use 8-9 steps and a CFG of 1. In addition to the default Euler scheduler for ZIT, we offer the more accurate but slower Heun scheduler, and a faster but less accurate LCM scheduler. Note that the LCM and Heun schedulers are still experimental, and may not produce optimal results in some workflows.
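The difference between these schedulers comes down to the order of the ODE integrator each step uses. As an illustration only (not InvokeAI's implementation), here is a toy comparison of a first-order Euler step against a second-order Heun step, using an exactly solvable velocity field in place of the denoising model:

```python
import numpy as np

def euler_step(x, t, dt, velocity):
    # First-order update: one model evaluation per step.
    return x + dt * velocity(x, t)

def heun_step(x, t, dt, velocity):
    # Second-order predictor-corrector update: two model evaluations
    # per step, hence roughly twice as slow, but more accurate.
    k1 = velocity(x, t)
    k2 = velocity(x + dt * k1, t + dt)
    return x + dt * 0.5 * (k1 + k2)

# Toy velocity field with a known solution (dx/dt = -x, so x(1) = e^-1),
# standing in for the learned denoising model.
v = lambda x, t: -x

x_euler = x_heun = np.array([1.0])
t, dt = 0.0, 0.125
for _ in range(8):
    x_euler = euler_step(x_euler, t, dt, v)
    x_heun = heun_step(x_heun, t, dt, v)
    t += dt

print(x_euler[0], x_heun[0])  # Heun lands much closer to exp(-1) ≈ 0.3679
```

This is why Heun is "more accurate but slower": it buys a higher-order error bound at the cost of a second model evaluation per step.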

A big shout out to @Pfannkuchensack for his critical contributions to this effort.

New Workflow Features

We have two new improvements to the Workflow Editor:

  • Workflow Tags: It is now possible to add multiple arbitrary text tags to your workflows. To set a tag on the current workflow, go to Details and scroll down to Tags. Enter a comma-delimited list of tags that describe your workflow, such as "image, bounding box", and save. The next time you browse your workflows, you will see a checkbox for each unique tag in your workflow collection. Select the tag checkboxes individually or in combination to filter the workflows that are displayed. This feature was contributed by @Pfannkuchensack.
  • Prompt Template Node: Another @Pfannkuchensack workflow contribution is a new Prompt Template node, which allows you to apply any of the built-in or custom prompt style templates to a prompt before passing it onward to generation.
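The tag mechanics can be sketched in a few lines (hypothetical helper names; the AND-style filtering when multiple checkboxes are selected is an assumption about the UI's behavior):

```python
def parse_tags(raw: str) -> set[str]:
    # "image, bounding box" -> {"image", "bounding box"}
    return {t.strip() for t in raw.split(",") if t.strip()}

def filter_workflows(workflows: dict[str, str], selected: set[str]) -> list[str]:
    # Keep workflows whose tag set contains every selected tag.
    return sorted(
        name for name, raw in workflows.items() if selected <= parse_tags(raw)
    )

workflows = {
    "upscale": "image, bounding box",
    "portrait": "image, face",
    "txt2img": "",
}
print(filter_workflows(workflows, {"image"}))           # ['portrait', 'upscale']
print(filter_workflows(workflows, {"image", "face"}))   # ['portrait']
```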

Hotkey Editor

@Pfannkuchensack and @joshistoast contributed a new user interface for editing hotkeys. Any of the major UI functions, such as kicking off a generation, opening or closing panels, selecting tools in the canvas, gallery navigation, and so forth, can now be assigned a key shortcut combination. You can also assign multiple hotkeys to the same function.

To access the hotkey editor, go to the Settings (gear) menu in the bottom left, and select Hotkeys.
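The "multiple hotkeys per action" design can be pictured as a registry where many key combinations resolve to one action name (a toy sketch with illustrative names, not InvokeAI's actual code):

```python
class HotkeyMap:
    """Toy registry: several key combos may map to the same action."""

    def __init__(self):
        self._combo_to_action: dict[str, str] = {}

    def bind(self, combo: str, action: str) -> None:
        # Normalise case so "Ctrl+Enter" and "ctrl+enter" are one binding.
        self._combo_to_action[combo.lower()] = action

    def lookup(self, combo: str):
        return self._combo_to_action.get(combo.lower())

    def combos_for(self, action: str) -> list[str]:
        return sorted(c for c, a in self._combo_to_action.items() if a == action)

keys = HotkeyMap()
keys.bind("Ctrl+Enter", "invoke")  # default shortcut
keys.bind("F5", "invoke")          # a second shortcut for the same action
keys.bind("G", "toggle-gallery")

print(keys.lookup("ctrl+enter"))   # invoke
print(keys.combos_for("invoke"))   # ['ctrl+enter', 'f5']
```

Keeping the mapping combo-to-action (rather than action-to-combo) is what makes extra bindings cheap: each new shortcut is just one more dictionary entry.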

Bulk Operations in the Model Manager

You can now select multiple models in the Model Manager tab and apply bulk operations to them. Currently the only supported operation is to Delete unwanted models, but this feature will be expanded in the future to allow for model exporting, archiving, and other functionality.
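The batch semantics can be sketched as follows (hypothetical helper; a real implementation would also remove database records and delete files on disk):

```python
def bulk_delete(models: dict[str, object], selected: list[str]) -> list[str]:
    # Remove every selected key; collect keys that were not found so the
    # caller can report them instead of aborting the whole batch.
    missing = [key for key in selected if key not in models]
    for key in selected:
        models.pop(key, None)
    return missing

library = {"sdxl-base": "...", "zit-q8": "...", "flux-dev": "..."}
print(bulk_delete(library, ["zit-q8", "nonexistent"]))  # ['nonexistent']
print(sorted(library))                                  # ['flux-dev', 'sdxl-base']
```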

This feature was contributed by @joshistoast, based on earlier work by @Pfannkuchensack.

Masked Area Extraction in the Canvas

It is now possible to extract an arbitrary portion of all visible raster layers that are covered by the Inpaint Mask. The extracted region is composited and added as a new raster layer. This allows for greater flexibility in the generation and manipulation of raster layers.
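Conceptually, the operation composites the visible layers and then uses the mask as the new layer's alpha. A simplified NumPy sketch (illustrative only; InvokeAI's canvas has its own compositing pipeline):

```python
import numpy as np

def composite_visible(layers):
    # layers: bottom-to-top list of (HxWx4 float RGBA array, visible flag).
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 4))
    for rgba, visible in layers:
        if not visible:
            continue
        a = rgba[..., 3:4]
        # Standard "over" operator: new layer on top of the running result.
        out[..., :3] = rgba[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:] = a + out[..., 3:] * (1.0 - a)
    return out

def extract_masked(layers, mask):
    # mask: HxW floats in [0, 1]; the extracted layer keeps the composited
    # colour but takes its alpha from the inpaint mask.
    extracted = composite_visible(layers)
    extracted[..., 3] *= mask
    return extracted

red = np.zeros((2, 2, 4)); red[..., 0] = 1.0; red[..., 3] = 1.0  # opaque red
mask = np.array([[1.0, 1.0], [0.0, 0.0]])                        # top row masked
layer = extract_masked([(red, True)], mask)
print(layer[0, 0])  # masked pixel survives with full alpha
print(layer[1, 0])  # unmasked pixel becomes transparent
```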

Thanks to @DustyShoe for this work.

PBR Maps

@blessedcoolant added support for PBR maps, a set of three texture images that can be used in 3D graphics applications to define a material's physical properties, such as glossiness. To generate the PBR maps, simply right click on any image in the viewer or gallery, and select "Filters -> PBR Maps". This will generate PBR Normal, Displacement, and Roughness map images suitable for use with a separate 3D rendering package.
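As an illustration of what a normal map encodes (not InvokeAI's filter, which may use a learned model), a classical approach derives normals from the finite-difference gradients of a height field:

```python
import numpy as np

def normal_map_from_height(height, strength=1.0):
    # Surface slope from finite-difference gradients; the unit normal
    # (-dx, -dy, 1) is remapped from [-1, 1] to the usual 0..255 encoding.
    dy, dx = np.gradient(height.astype(np.float64))
    n = np.dstack((-dx * strength, -dy * strength, np.ones_like(dx)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip((n * 0.5 + 0.5) * 255.0, 0, 255).astype(np.uint8)

flat = np.zeros((4, 4))  # a perfectly flat surface
print(normal_map_from_height(flat)[0, 0])  # [127 127 255] -- straight-up normal
```

The displacement and roughness maps are, respectively, the height field itself and a per-pixel estimate of micro-surface variation; the same gradient machinery generalizes to both.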

New FLUX Model Schedulers

We've also added new schedulers for FLUX models (both dev and schnell). In addition to the default Euler scheduler, you can select the more accurate but slower Heun scheduler, or the faster but less accurate LCM scheduler. Look for the selection under "Advanced Options" in the Text2Image settings panel, or in the FLUX Denoise node in the workflow editor. Note that the LCM and Heun schedulers are still experimental and may not produce optimal results in some workflows.

Thanks to @Pfannkuchensack for this contribution.

SDXL Color Compensation

When performing SDXL image2image operations, the color palette shifts subtly, and the drift compounds into an obvious discrepancy after several successive operations. @dunkeroni has contributed a new advanced option that compensates for this color drift when generating with SDXL models.
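One classical way to counteract such drift, shown purely as an illustration rather than as the actual implementation, is to match each channel's statistics back to the source image:

```python
import numpy as np

def match_channel_stats(image, reference):
    # Shift and scale each colour channel so its mean and standard
    # deviation match the reference -- a simple global colour transfer.
    out = image.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        mu, sigma = out[..., c].mean(), out[..., c].std()
        ref_mu, ref_sigma = ref[..., c].mean(), ref[..., c].std()
        scale = ref_sigma / sigma if sigma > 1e-8 else 1.0
        out[..., c] = (out[..., c] - mu) * scale + ref_mu
    return np.clip(out, 0.0, 255.0)

drifted = np.full((8, 8, 3), 100.0)   # toy image whose palette has shifted
original = np.full((8, 8, 3), 150.0)  # toy reference
print(match_channel_stats(drifted, original)[0, 0])  # [150. 150. 150.]
```

Applying such a correction after each image2image pass keeps the statistics anchored to the original, so the drift no longer accumulates.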

Bugfixes

Multiple bugs were caught and fixed in this release and are listed in the detailed changelog below.

Installing and Updating

The Invoke Launcher is the recommended way to install, update, and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success by manually downgrading torch. Head over to Discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.

What's Changed


v6.10.0rc1

26 Dec 14:41
65efc3d

Pre-release

InvokeAI v6.10.0rc1


What's Changed

Full Changelog: v6.9.0...v6.10.0rc1

v6.9.0

17 Oct 00:09
8b0d880


This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.

On first run after installing this release, Invoke will do some data migrations:

  • Run-of-the-mill database updates.
  • Update some model records to work with internal Model Manager changes, described below.
  • Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.

If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.

Model Installation Improvements

Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.

Unknown Models

Previously, when this identification failed, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.

As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.

If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.

Invoke-managed Models Directory

Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.

As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named its unique key: <models_dir>/<model_key_uuid>/model.safetensors.

On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.
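The migration amounts to moving each managed file into a folder named after its database key, skipping anything outside the managed directory. A simplified sketch (hypothetical function; the real migration also updates the database records):

```python
import shutil
import tempfile
from pathlib import Path

def migrate_to_flat(models_dir: Path, records: dict[str, Path]) -> None:
    # records maps each model's UUID key to its current (possibly nested)
    # file path; every managed file moves to <models_dir>/<uuid>/<name>.
    for key, old_path in records.items():
        if models_dir not in old_path.parents:
            continue  # model lives outside the managed directory: leave it
        new_dir = models_dir / key
        new_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(old_path), str(new_dir / old_path.name))

# Demo on a throwaway directory mimicking the old nested layout.
root = Path(tempfile.mkdtemp())
old = root / "main" / "sdxl" / "model.safetensors"
old.parent.mkdir(parents=True)
old.write_text("weights")
migrate_to_flat(root, {"1f2e3d4c": old})
print((root / "1f2e3d4c" / "model.safetensors").exists())  # True
```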

We understand this change may seem user-unfriendly at first, but there are good reasons for it:

  • This structure eliminates the possibility of model name conflicts, which have caused numerous hard-to-fix bugs and errors.
  • It reinforces that the internal models directory is Invoke-managed:
    • Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
    • Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
  • It obviates the need to move models around when changing their type and base.

Refactored Model Identification System

Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing model files to determine what kind of model they contain.

As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.

Model Identification Test Suite

Besides the business logic improvements, model identification is now fully testable!

When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.

Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.

This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115 MB but represents hundreds of GB of models.
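The stripping step can be sketched in a few lines: keep each state-dict key with its shape and dtype, and discard the tensor data (illustrative only; the real test fixtures may store additional metadata):

```python
import numpy as np

def skeletonize(state_dict):
    # Keep each key with only its shape and dtype -- everything the
    # identification logic inspects, at a tiny fraction of the size.
    return {
        key: {"shape": tuple(value.shape), "dtype": str(value.dtype)}
        for key, value in state_dict.items()
    }

sd = {"down_blocks.0.weight": np.zeros((320, 4, 3, 3), dtype=np.float16)}
print(skeletonize(sd))
# {'down_blocks.0.weight': {'shape': (320, 4, 3, 3), 'dtype': 'float16'}}
```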


What's Changed

Full Changelog: v6.8.1...v6.9.0

v6.9.0rc3

15 Oct 21:20

Pre-release


What's Changed

Full Changelog: v6.8.1...v6.9.0rc3

v6.9.0rc2

15 Oct 04:19
34928ee

Pre-release


What's Changed

Full Changelog: v6.8.1...v6.9.0rc2

v6.9.0rc1

15 Oct 02:19
1c73c6e

Pre-release

This release focuses on improvements to Invoke's Model Manager. The changes are mostly internal, with one significant user-facing change and a data migration.

On first run after installing this release, Invoke will do some data migrations:

  • Run-of-the mill database updates.
  • Update some model records to work with internal Model Manager changes, described below.
  • Restructure the Invoke-managed models directory into a flat directory structure, where each model gets its own folder named by the model's UUID. Models outside the Invoke-managed models directory are not moved.

If you see any errors or run into any problems, please create a GH issue or ask for help in the #new-release-discussion channel of the Invoke discord.

Model Installation Improvements

Invoke analyzes models during install to attempt to identify them, recording their attributes in the database. This includes the type of model, its base architecture, its file format, and so on. This release includes a number of improvements to that process, both user-facing and internal.

Unknown Models

Previously, when this identification fails, we gave up on that model. If you had downloaded the model via Invoke, we would delete the downloaded file.

As of this release, if we cannot identify a model, we will install it as an Unknown model. If you know what kind of model it is, you can try editing the model via the Model Manager UI to set its type, base, format, and so on. Invoke may be able to run the model after this.

If the model still doesn't work, please create a GH issue linking to the model so we can improve model support. The internal changes in this release are focused on making it easier for contributors to support new models.

Invoke-managed Models Directory

Previously, as a relic of times long past, Invoke's internal model storage was organized in nested folders: <models_dir>/<type>/<base>/model.safetensors. Many moons ago, we didn't have a database, and models were identified by putting them into the right folder. This has not been the case for a long time.

As of this release, Invoke's internal model storage has a normalized, flat directory structure. Each model gets its own folder, named its unique key: <models_dir>/<model_key_uuid>/model.safetensors.

On first startup of this release, Invoke will move model files into the new flat structure. Your non-Invoke-managed models (i.e. models outside the Invoke-managed models directory) won't be touched.

We understand this change may seem user-unfriendly at first, but there are good reasons for it:

  • This structure eliminates the possibility of model name conflicts, which have caused of numerous hard-to-fix bugs and errors.
  • It reinforces that the internal models directory is Invoke-managed:
    • Adding models to this directory manually does not add them to Invoke. With the previous structure, users often dropped models into a folder and expected them to work.
    • Deleting models from this directory or moving them in the directory causes the database to lose track of the models.
  • It obviates the need to move models around when changing their type and base.

Refactored Model Identification system

Several months ago, we started working on a new API to improve model identification (aka "probing" or "classification"). This process involves analyzing model files to determine what kind of model it is.

As of this release, the new API is complete and all legacy model identification logic has been ported to it. Along with the changes in #8577, the process of adding new models to Invoke is much simpler.

Model Identification Test Suite

Besides the business logic improvements, model identification is now fully testable!

When we find a model that is not identified correctly, we can add that model to the test suite, which currently has test cases for 70 models.

Models can be many GB in size and are thus not particularly well-suited to be stored in a git repo. We can work around this by creating lightweight representations of models. Model identification typically relies on analyzing model config files or state dict keys and shapes, but we do not need the tensors themselves for this process.

This allows us to strip out the weights from model files, leaving only the model's "skeleton" as a test case. The 70-model test suite is currently about 115MB but represents hundreds of GB of models.

Installing and Updating

The Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of Python - and runs Invoke as a desktop application.

Note: With recent updates to torch, users on older GPUs (20xx and 10xx series) will likely run into issues with installing/updating. We are still evaluating how we can support older GPUs, but in the meantime users have found success manually downgrading torch. Head over to discord if you need help.

Follow the Quick Start guide to get started with the launcher.

If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.

What's Changed

Full Changelog: v6.8.1...v6.9.0rc1

v6.8.1

12 Oct 03:43
b673b2f


This patch release fixes the Exception in ASGI application startup error that prevents Invoke from starting.

The error was introduced by an upstream dependency (fastapi). We've pinned the fastapi dependency to the last known working version.
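
For readers unfamiliar with dependency pinning, this is roughly what such a pin looks like in a `pyproject.toml`. The version number below is illustrative only; the actual pinned version is in InvokeAI's own `pyproject.toml`.

```toml
# Hypothetical pyproject.toml fragment; the version shown is an example,
# not necessarily the one InvokeAI pins.
[project]
dependencies = [
    "fastapi==0.110.0",  # exact pin: prevents pulling in a broken newer release
]
```

An exact `==` pin trades automatic upgrades for stability until the upstream regression is resolved.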

What's Changed

Full Changelog: v6.8.0...v6.8.1

v6.8.0

08 Oct 20:51


This minor release includes a handful of fixes and enhancements.

Fixes

  • When accepting raster layer adjustments, the opacity of the layer was "baked" in.
  • Corrected help text for non-in-place model installation. Previously, the help text said that a non-in-place model install would copy the model files. This is incorrect; it moves them into the Invoke-managed models dir.
  • Failure to queue generations with an error like Failed to Queue Batch / Unknown Error.

Enhancements

  • Added a crop tool. For now, it is only enabled for Global Ref Images.
    • Click the crop icon on the Ref Image preview to open the tool.
    • Adjust the crop box and click Apply to save the cropped image for that ref image.
    • To revert, open the crop tool, click Reset, then Apply to revert to the original image.
    • We'll explore integrating this new tool elsewhere in the app in a future update.
  • Improved Model Manager tab UI. Thanks @joshistoast!
  • Keyboard shortcuts to navigate prompt history. Use alt/option+up/down to move through history.
  • Support for the NOOB-IPA-MARK1 IP Adapter. Thanks @Iq1pl!

Internal

  • Support for dynamic model drop-downs in Workflow Editor. This change greatly reduces the number of frontend code changes needed to support a new model type. Node authors may need to update their nodes to prevent warnings from being displayed, but no breakages are expected. See #8577 for more details.

What's Changed

New Contributors

Full Changelog: v6.7.0...v6.8.0

v6.8.0rc2

08 Oct 06:37


v6.8.0rc2 Pre-release

This minor release includes a handful of fixes and enhancements.

Fixes

  • When accepting raster layer adjustments, the opacity of the layer was "baked" in.
  • Corrected help text for non-in-place model installation. Previously, the help text said that a non-in-place model install would copy the model files. This is incorrect; it moves them into the Invoke-managed models dir.

Enhancements

  • Added a crop tool. For now, it is only enabled for Global Ref Images.
    • Click the crop icon on the Ref Image preview to open the tool.
    • Adjust the crop box and click Apply to save the cropped image for that ref image.
    • To revert, open the crop tool, click Reset, then Apply to revert to the original image.
    • We'll explore integrating this new tool elsewhere in the app in a future update.
  • Improved Model Manager tab UI. Thanks @joshistoast!
  • Keyboard shortcuts to navigate prompt history. Use alt/option+up/down to move through history.
  • Support for the NOOB-IPA-MARK1 IP Adapter. Thanks @Iq1pl!

Internal

  • Support for dynamic model drop-downs in Workflow Editor. This change greatly reduces the number of frontend code changes needed to support a new model type. Node authors may need to update their nodes to prevent warnings from being displayed, but no breakages are expected. See #8577 for more details.

What's Changed

New Contributors

Full Changelog: v6.7.0...v6.8.0rc2

v6.8.0rc1

17 Sep 04:11


v6.8.0rc1 Pre-release

This minor release includes a few QoL enhancements.

Enhancements

  • Added a crop tool. For now, it is only enabled for Global Ref Images.
    • Click the crop icon on the Ref Image preview to open the tool.
    • Adjust the crop box and click Apply to save the cropped image for that ref image.
    • To revert, open the crop tool, click Reset, then Apply to revert to the original image.
    • We'll explore integrating this new tool elsewhere in the app in a future update.
  • Improved Model Manager tab UI. Thanks @joshistoast!
  • Keyboard shortcuts to navigate prompt history. Use alt/option+up/down to move through history.

What's Changed

Full Changelog: v6.7.0...v6.8.0rc1