Releases: roboflow/inference
v0.34.0
🚀 Added
Introducing Stability AI Image Generation Block v1 🖌️✨
Your gateway to limitless creativity with Stability AI! The block leverages Stability AI’s robust API with an easy-to-use interface. Just plug in your Stability AI API key, and you’re ready to go!
Main features:
- Text-to-Image Magic - Generate entirely new images from text prompts in seconds.
- Image Variations Made Easy - Start with an image and let the block transform it into captivating variations. Adjust the influence of your input image with precise control (strength parameter).
- Positive Prompts: Describe what you want to see.
- Negative Prompts: Specify what you don’t want to include.
- Model Selection: Choose from cutting-edge models (core, ultra, sd3) to best suit your creative needs.
- Change introduced by @deependujha in #933
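To make the shape of such a step concrete, here is a hypothetical Workflow definition using the block. The `type` identifier and field names below are illustrative assumptions, not the exact block manifest - consult the block documentation for the real schema:

```python
# Hypothetical Workflow definition using the Stability AI image generation
# block. Step "type" and parameter names are assumed for illustration.
WORKFLOW_DEFINITION = {
    "version": "1.0",
    "inputs": [
        {"type": "WorkflowParameter", "name": "stability_api_key"},
        {"type": "WorkflowParameter", "name": "prompt"},
    ],
    "steps": [
        {
            "type": "roboflow_core/stability_ai_image_gen@v1",  # assumed name
            "name": "image_generation",
            "api_key": "$inputs.stability_api_key",
            "prompt": "$inputs.prompt",              # positive prompt
            "negative_prompt": "blurry, low quality",  # what to avoid
            "model": "core",                         # core / ultra / sd3
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "generated_image",
            "selector": "$steps.image_generation.image",
        }
    ],
}
```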
💪 Improved
Contributions from @hansent enhancing Inference documentation:
- auto generate command line docs for CLI
- HTTP API Reference Docs
- add cli requirements to github workflow for generating docs
- update ui manifest for all blocks
- docstrings for SDK reference docs
- remove outdated active learning docs
⚡ Support for Roboflow Instant Models
Roboflow Instant Models are now supported in Inference! While this feature is part of the broader Roboflow Instant Models initiative, Inference now includes the ability to load these models seamlessly.
Roboflow Instant Models leverage the power of box prompting, utilizing your entire dataset as prompts during inference for enhanced performance and smarter predictions.
- Support for loading Roboflow Instant Models in Inference added by @grzegorz-roboflow in #929
Other changes
- Added Percent Padding to the Detection Offset Block by @chandlersupple in #956
- Add changes required to effectively index content of batch processing by @PawelPeczek-Roboflow in #942
- Fix the `class_name` alias for keypoint detection in workflows by @shantanubala in #946
- Handle optional workflow_id if passed as part of request to /workflows/run by @grzegorz-roboflow in #954
- Security improvement - avoid passing user value to isfile by @grzegorz-roboflow in #957
- Fix OwlV2.init by @grzegorz-roboflow in #937
- Pass is_preview if available when handling workflow request by @grzegorz-roboflow in #952
New Contributors
- @deependujha made their first contribution in #933
Full Changelog: v0.33.0...v0.34.0
v0.33.0
🚀 Added
Llama Vision 3.2 🤝 other VLMs supported in Workflows
We welcome a new block bringing Llama Vision 3.2 to the Workflows ecosystem!
Llama 3.2 is a new generation of vision and lightweight models that fit on edge devices, tailored for use cases that require more private and personalized AI experiences.
Related changes:
- Fix/onboarding llama 3.2 by @PawelPeczek-Roboflow in #927
- Tests for LLama Vision 3.2 by @PawelPeczek-Roboflow in #928
MQTT Writer Enterprise Workflow Block (added in #930)
This block enables our enterprise users to publish messages to an MQTT broker through Workflows.
MQTT (Message Queuing Telemetry Transport) is a lightweight messaging protocol designed for low-bandwidth, high-latency, or unreliable networks. It's widely used in applications where devices need to communicate with minimal overhead, such as the Internet of Things (IoT).
With this change workflows can communicate with the world through MQTT!
Change introduced by @chandlersupple
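As a rough sketch of what the block does downstream of a Workflow, the snippet below publishes a result as JSON to an MQTT topic. In a real deployment you would pass a connected `paho.mqtt.client.Client`; here a stub client stands in so the sketch runs without a broker (the topic name and payload are invented for illustration):

```python
import json

def publish_workflow_result(client, topic: str, payload: dict) -> None:
    """Publish a Workflow result as JSON to an MQTT topic.

    `client` is any object exposing a paho-mqtt-style
    publish(topic, payload) method.
    """
    client.publish(topic, json.dumps(payload))

# Stub standing in for a connected MQTT client.
class StubClient:
    def __init__(self):
        self.messages = []

    def publish(self, topic, payload):
        self.messages.append((topic, payload))

client = StubClient()
publish_workflow_result(client, "factory/line-1/detections", {"count": 3})
print(client.messages[0][0])  # -> factory/line-1/detections
```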
Plc EthernetIP Enterprise Workflow Block (added in #905)
This block enables our enterprise users to interface with PLC.
A Programmable Logic Controller (PLC) is an industrial computer specifically designed to automate machinery and processes in manufacturing and other industries. It monitors inputs (e.g., sensors), processes data based on a programmed logic, and controls outputs (e.g., actuators) to perform tasks.
This block uses the `pylogix` library over EtherNet/IP. The block supports three modes of operation:
- read: Reads specified tags from the PLC.
- write: Writes specified values to the PLC tags.
- read_and_write: Performs both read and write operations in a single execution.
This change brings vision capabilities into real-world industrial plants!
Change introduced by @reedajohns
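The three modes can be sketched as a small dispatcher. The function below mirrors the block's behavior against any object exposing pylogix-style `Read(tag)` / `Write(tag, value)` methods; a fake PLC is used here so the sketch runs without hardware (tag names and the helper itself are illustrative, not the block's internals):

```python
def run_plc_step(plc, mode: str, tags_to_read=None, tags_to_write=None):
    """Sketch of the block's three modes of operation.

    `plc` is any object with pylogix-style Read(tag) and Write(tag, value)
    methods (e.g. a connected pylogix.PLC in a real setup).
    """
    result = {}
    if mode in ("read", "read_and_write"):
        result["reads"] = {tag: plc.Read(tag) for tag in (tags_to_read or [])}
    if mode in ("write", "read_and_write"):
        for tag, value in (tags_to_write or {}).items():
            plc.Write(tag, value)
        result["writes"] = dict(tags_to_write or {})
    return result

# Fake PLC so the sketch runs without real hardware.
class FakePLC:
    def __init__(self):
        self.tags = {"Counter": 7}

    def Read(self, tag):
        return self.tags.get(tag)

    def Write(self, tag, value):
        self.tags[tag] = value

plc = FakePLC()
out = run_plc_step(plc, "read_and_write",
                   tags_to_read=["Counter"],
                   tags_to_write={"Alarm": 1})
print(out)  # -> {'reads': {'Counter': 7}, 'writes': {'Alarm': 1}}
```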
💪 Improved
Documentation improvements
@yeldarby transforms Inference docs with streamlined navigation, styling, and instant rendering!
- Refresh README
- Update Docs Styling & Nav
- Add Side Nav to Blocks and Kinds Gallery Page
- Add Instant Rendering for Docs
- Docs: Update Links
- Update Landing Page
More contributions enhancing Inference documentation:
- better version handling in generated block pages by @hansent in #922
- Update README.md by @ThatOrJohn in #921
Improvements to CI by @alexnorell
Other changes
- Add env-injectable headers to RF API requests by @PawelPeczek-Roboflow in #932
- Pass roboflow workflow ID as usage_workflow_id if available by @grzegorz-roboflow in #926
- Collect usage after execution of decorated methods by @grzegorz-roboflow in #931
- Improved the SizeMeasurementBlock Docs by @chandlersupple in #916
New Contributors
- @AHB102 made their first contribution in #866
- @ThatOrJohn made their first contribution in #921
Full Changelog: v0.32.0...v0.33.0
v0.32.0
🚀 Added
👀 Gaze detection in Workflows
@joaomarcoscrs (as part of his first contribution 🏅) introduced the Gaze Detection model into Workflows.
Don't know what Gaze Detection is?
Gaze detection is a method to determine where a person is looking by analyzing their eye movements and gaze direction. It typically uses cameras or sensors to track eye position and orientation, identifying the point of focus in real time.
It is commonly used in areas like:
- Human-Computer Interaction: Controlling devices with eye movements.
- Behavioral Analysis: Understanding attention and interest.
- Marketing Research: Measuring what catches a person's attention.
Now you can apply Workflows in such use cases. Check out the Gaze Detection block 📖 documentation to find more information.
Note
The block is currently not supported on the Roboflow Hosted Platform. Check out how to send requests to a localhost `inference` server.
🏋️♂️ New experimental Workflows blocks enabling new capabilities
@yeldarby prepared a whole series of blocks opening up new capabilities for Workflows, including:
- Workflows Buffer Block in #894
- Workflows Grid Visualization Block in #895
- Workflow Cache Get/Set Blocks in #893
- Workflows Outlier Detection Block in #896
💪 Improved
Florence 2 now runs up to 3x faster
🧙♂️ @isaacrob-roboflow did some magic 🪄 and now, all of a sudden, `Florence2` models deployed in `inference` can run up to 3x faster 🤯 ❗
See details in #885
🔧 Fixed
Security vulnerability in landing page
We've fixed a security issue in the `inference` server landing page: #890
Issue description
If a Next.js application is performing authorization in middleware based on pathname, it was possible for this authorization to be bypassed.
This issue was patched in Next.js `14.2.15` and later.
Caution
We advise all users of older versions of the `inference` server to migrate to version `0.32.0`.
Other fixes
- Add fix for the problem with inference-cli workflows predictions saving by @PawelPeczek-Roboflow in #891
- Improvements in block descriptions by @EmilyGavrilenko (#898) and @casmwenger (#897)
- Fix usage collector fps by @grzegorz-roboflow in #903
🚧 What's Changed
- Add test to detect blocks with missing init.py by @grzegorz-roboflow in #883
- Cache CLIP Text Embeddings in Workflow Block by @yeldarby in #892
- Allow using video metadata for rate limiter on recorded video by @yeldarby in #887
- Serialized owlv2 model by @probicheaux in #889
- Skip additional test by @PawelPeczek-Roboflow in #902
🏅 New Contributors
- @joaomarcoscrs made their first contribution in #888
Full Changelog: v0.31.1...v0.32.0
v0.31.1
🔧 Fixed
- Fix inference 0.30.0 release by @PawelPeczek-Roboflow in #882 - we just forgot `__init__.py`
Full Changelog: v0.31.0...v0.31.1
v0.31.0
🚀 Added
📏 Easily create embeddings and compare them in Workflows
Thanks to @yeldarby, we have Clip Embedding and Cosine Similarity Workflows blocks. Just take a look at what is now possible.
💡 Application ideas
- Visual Search: Match text queries (e.g., "red shoes") to the most relevant images without training a custom model.
- Image Deduplication: Identify similar or duplicate images by calculating embeddings and measuring cosine similarity.
- Zero-Shot Classification: Classify images into categories by comparing their embeddings to pre-defined text labels (e.g., "cat," "dog").
- Outlier Detection: Check which images do not match the general trend.
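All of the ideas above boil down to comparing embeddings with cosine similarity. A minimal, dependency-free sketch of that computation (the toy 3-dimensional vectors stand in for real CLIP embeddings, which are typically 512-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" - in a real Workflow these come from the
# Clip Embedding block.
query   = [1.0, 0.0, 1.0]
image_a = [1.0, 0.0, 1.0]  # same direction -> similarity 1.0
image_b = [0.0, 1.0, 0.0]  # orthogonal    -> similarity 0.0

print(cosine_similarity(query, image_a))  # -> 1.0
print(cosine_similarity(query, image_b))  # -> 0.0
```

Values close to 1.0 mean "very similar", which is exactly the signal used for visual search, deduplication, zero-shot classification, and outlier detection.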
✨ `gemini-2.0-flash` 🤝 Workflows
Check out the model card and start using the new model by simply pointing to the new model type in the Google Gemini Workflow block 😄 All thanks to @EmilyGavrilenko
🔥 Recent `supervision` versions are now supported
For a long time we had an issue with not supporting up-to-date `supervision` releases. This is no longer the case thanks to @LinasKo and his contribution #881 🙏
🐕🦺 React on changes in Workflows
We have a new Delta Filter block that optimizes Workflows by triggering downstream steps only when input values change, reducing redundant processing.
📊 Key Features:
- Value Changes Detection: Triggers actions only on value changes.
- Flexibility: Hooks up to changes in numbers, strings, and more.
- Per-Video Caching: Changes are tracked separately for each video stream or batch element.
💡 Use Case:
- Detect changes (e.g., people count) in video analysis and trigger downstream actions efficiently.
🔧 Fixed
- `confidence` threshold was not applied for `multi-label` classification models - @grzegorz-roboflow fixed the problem in #873
- Active Learning data collection finally works for `multi-label` classification models - see @grzegorz-roboflow's work in #874
- Fixed `model_id` bug with InferenceAggregator block by @robiscoding in #876
- Security issue: bump `nanoid` from 3.3.7 to 3.3.8 - see #878
- Fix measurement logic for segmentations in measurement block by @NickHerrig in #872
🚧 Changed
- Improve is_mergeable workflow speed by @grzegorz-roboflow in #846
- Additional DetectionsSelection Operations by @EmilyGavrilenko in #879
- Add Block Copy updates and Spellcheck by @casmwenger in #856
- Updated the OPC UA Writer Block Descriptions and Examples by @chandlersupple in #868
New Contributors
- @casmwenger made their first contribution in #856
Full Changelog: v0.30.0...v0.31.0
v0.30.0
🚀 Added
✨ Paligemma2 support!
Enhanced model support: We’re excited to introduce Paligemma2 integration, a next-generation model designed for more flexible and efficient inference. This upgrade facilitates smoother handling of multi-modal inputs like images and captions, offering better versatility in machine learning applications. Check out the implementation details and examples in this script to see how to get started.
Change added by @probicheaux in #864
Remaining changes
- Unpin `httpx` by @PawelPeczek-Roboflow in #861
- Fix docs builder CI by @grzegorz-roboflow in #863
- Add poison pill to connectionstatechange in webrtc by @grzegorz-roboflow in #862
- Bump version of rich by @PawelPeczek-Roboflow in #867
- Kapa: remove duplicate initialization by @LinasKo in #869
- Add model id output param by @robiscoding in #857
Full Changelog: v0.29.2...v0.30.0
v0.29.2
`ultralytics` security issue fixed
Caution
Ultralytics maintainers notified the community that the code in the `ultralytics` wheel `8.3.41` is not what's in GitHub and appears to invoke mining. Users of ultralytics who install 8.3.41 will unknowingly execute an xmrig miner.
Please see this issue for more details
Remaining fixes
- python 3.12 support by @hansent in #841
- Pin ultralytics version by @bigbitbus in #858
Full Changelog: v0.29.1...v0.29.2
v0.29.1
🛠️ Fixed
`python-multipart` security issue fixed
Caution
We are removing the following vulnerability detected recently in the `python-multipart` library.
Issue summary
When parsing form data, python-multipart skips line breaks (CR `\r` or LF `\n`) in front of the first boundary and any trailing bytes after the last boundary. This happens one byte at a time and emits a log event each time, which may cause excessive logging for certain inputs.
An attacker could abuse this by sending a malicious request with lots of data before the first or after the last boundary, causing high CPU load and stalling the processing thread for a significant amount of time. In case of ASGI application, this could stall the event loop and prevent other requests from being processed, resulting in a denial of service (DoS).
Impact
Applications that use python-multipart to parse form data (or use frameworks that do so) are affected.
Next steps
We advise all `inference` clients to migrate to version `0.29.1`, especially when the `inference` docker image is in use. Clients using older versions of the Python package may also upgrade the vulnerable dependency in their environment:

```
pip install "python-multipart==0.0.19"
```
Details of the change: #855
Remaining fixes
- Fix problem with docs rendering by @PawelPeczek-Roboflow in #854
- Remove piexif dependency by @iurisilvio in #851
Full Changelog: v0.29.0...v0.29.1
v0.29.0
🚀 Added
📧 Slack and Twilio notifications in Workflows
We've just added two notification blocks to the Workflows ecosystem - Slack and Twilio. Now, there is nothing that can stop you from sending notifications from your Workflows!
`inference-cli` 🤝 Workflows
We are happy to share that `inference-cli` now has a new command - `inference workflows` - that makes it possible to process data with Workflows without any additional Python scripts needed 😄
🎥 Video files processing
- Input a video path, specify an output directory, and run any workflow.
- Frame-by-frame results saved as CSV or JSONL.
- Your Workflow outputs images? Get an output video from them if you want.
🖼️ Process images and directories of images 📂
- Outputs stored in subdirectories with JSONL/CSV aggregation available.
- Fault-tolerant processing:
- ✅ Resume after failure (tracked in logs).
- 🔄 Option to force re-processing.
Review our 📖 docs to discover all options!
👉 Try the command
To try the command, simply run:
```
pip install inference
inference workflows process-images-directory \
  -i {your_input_directory} \
  -o {your_output_directory} \
  --workspace_name {your-roboflow-workspace-url} \
  --workflow_id {your-workflow-id} \
  --api-key {your_roboflow_api_key}
```
🔑 Secrets provider block in Workflows
Many Workflows blocks require credentials to work correctly, but so far, the ecosystem only provided one secure option for passing those credentials - workflow parameters - forcing client applications to manipulate secret values. Since this is not a handy solution, we decided to create the Environment Secrets Store block, which is capable of fetching credentials from environment variables of the `inference` server. Thanks to that, admins can now set up the server, and client code does not need to handle secrets ✨
⚠️ Security Notice:
For enhanced security, always use secret providers or Workflow parameters to handle credentials. Hardcoding secrets into your Workflows is strongly discouraged.
🔒 Limitations:
This block is designed for self-hosted inference servers only. Due to security concerns, exporting environment variables is not supported on the hosted Roboflow Platform.
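At its core, the pattern is "the server resolves credentials from its own environment". A minimal sketch of that idea (the variable name and helper are invented for illustration; this is not the block's code):

```python
import os

# Simulate the server's environment for this sketch; in a real
# self-hosted deployment the admin exports the variable before
# starting the inference server.
os.environ["MY_SLACK_TOKEN"] = "xoxb-not-a-real-token"

def fetch_secret(name: str) -> str:
    """Sketch of what a secrets-provider step does: resolve a credential
    from the server's environment so client code never sees it."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"environment variable {name!r} is not set")
    return value

token = fetch_secret("MY_SLACK_TOKEN")
print(token.startswith("xoxb-"))  # -> True
```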
🌐 OPC Workflow block 📡
The OPC Writer block provides a versatile set of integration options that enable enterprises to seamlessly connect with OPC-compliant systems and incorporate real-time data transfer into their workflows. Here’s how you can leverage the block’s flexibility for various integration scenarios that industry-class solutions require.
✨ Key features
- Seamless OPC Integration: Easily send data to OPC servers, whether on local networks or cloud environments, ensuring your workflows can interface with industrial control systems, IoT devices, and SCADA systems.
- Cross-Platform Connectivity: Built with asyncua, the block enables smooth communication across multiple platforms, enabling integration with existing infrastructure and ensuring compatibility with a wide range of OPC standards.
Important
This Workflow block is released under Roboflow Enterprise License and is not available by default on Roboflow Hosted Platform.
Anyone interested in integrating Workflows with industry systems through OPC - please contact Roboflow Sales
See @grzegorz-roboflow's change in #842
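To give a feel for what an OPC UA write looks like, here is a sketch in the style of the `asyncua` library the block is built on. In production `client` would be a connected `asyncua.Client`; fakes are used here so the sketch runs without a real OPC UA server, and the node id is invented for illustration:

```python
import asyncio

async def write_to_opc(client, node_id: str, value) -> None:
    """Sketch of an OPC UA write in the style of asyncua.

    `client` is any object exposing get_node(node_id) returning a node
    with an async write_value(value) method.
    """
    node = client.get_node(node_id)
    await node.write_value(value)

# Fakes standing in for a connected asyncua.Client and its nodes.
class FakeNode:
    def __init__(self, store, node_id):
        self.store, self.node_id = store, node_id

    async def write_value(self, value):
        self.store[self.node_id] = value

class FakeClient:
    def __init__(self):
        self.store = {}

    def get_node(self, node_id):
        return FakeNode(self.store, node_id)

client = FakeClient()
asyncio.run(write_to_opc(client, "ns=2;s=Line1.DefectCount", 5))
print(client.store)  # -> {'ns=2;s=Line1.DefectCount': 5}
```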
🛠️ Fixed
Workflows Execution Engine v1.4.0
- New Kind: A secret kind for credentials is now available. No action needed for existing blocks, but future blocks should use it for secret parameters.
- Serialization Fix: Fixed a bug where non-batch outputs weren't being serialized in v1.3.0.
- Execution Engine Fix: Resolved an issue with empty inputs being passed to downstream blocks. This update ensures smoother workflow execution and may fix previous issues without any changes needed.
See full changelog for more details.
🚧 Changed
Open Workflows on Roboflow Platform
We are moving towards shareable Workflow Definitions on the Roboflow Platform - to reflect that, @yeldarby made the `api_key` optional in Workflows Run requests in #843
⛑️ Maintenance
- Update Docker Tag Logic by @alexnorell in #840
- Make check_if_branch_is_mergeable.yml to succeed if merging to main by @grzegorz-roboflow in #848
- Add workflow to check mergeable state executed on pull request by @grzegorz-roboflow in #847
Full Changelog: v0.28.2...v0.29.0
v0.28.2
🔧 Fixed issue with `inference` package installation
On 26.11.2024, release `0.20.4` of the `tokenizers` library (a transitive dependency of `inference`) introduced a breaking change for those `inference` clients who use Python 3.8, causing the following errors during installation of recent (and older) versions of `inference`:
👉 MacOS
```
Downloading tokenizers-0.20.4.tar.gz (343 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
    Cargo, the Rust package manager, is not installed or is not on PATH.
    This package requires Rust and Cargo to compile extensions. Install it through
    the system's package manager or via https://rustup.rs/
    Checking for Rust toolchain....
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details
```
👉 Linux
After installation, the following error was presented:
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1778: in _get_module
    return importlib.import_module("." + module_name, self.__name__)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:961: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
<frozen importlib._bootstrap_external>:843: in exec_module
    ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/__init__.py:15: in <module>
    from . import (
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/mt5/__init__.py:36: in <module>
    from ..t5.tokenization_t5_fast import T5TokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py:23: in <module>
    from ...tokenization_utils_fast import PreTrainedTokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:26: in <module>
    import tokenizers.pre_tokenizers as pre_tokenizers_fast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/__init__.py:78: in <module>
    from .tokenizers import (
E   ImportError: /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get

The above exception was the direct cause of the following exception:
tests/inference/models_predictions_tests/test_owlv2.py:4: in <module>
    from inference.models.owlv2.owlv2 import OwlV2
inference/models/owlv2/owlv2.py:11: in <module>
    from transformers import Owlv2ForObjectDetection, Owlv2Processor
<frozen importlib._bootstrap>:1039: in _handle_fromlist
    ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1766: in __getattr__
    module = self._get_module(self._class_to_module[name])
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1780: in _get_module
    raise RuntimeError(
E   RuntimeError: Failed to import transformers.models.owlv2 because of the following error (look up to see its traceback):
E   /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get
```
Caution
We are fixing the problem in `inference` 0.28.2, but it is not possible to fix older releases. Those who need a fix in their environments should modify their build such that, when installing `inference`, they also install `tokenizers<=0.20.3`:

```
pip install inference "tokenizers<=0.20.3"
```
🔧 Fixed issue with CUDA and stream management API
While running the `inference` server and using the stream management API to run Workflows against video inside a docker container, it was not possible to use CUDA due to a bug present from the very start of the feature. We are fixing it now.
Full Changelog: v0.28.1...v0.28.2