trigger.dev v4.4.2
Summary
2 new features, 2 improvements, 8 bug fixes.
Improvements
Add input streams for bidirectional communication with running tasks. Define typed input streams with streams.input<T>({ id }), then consume inside tasks via .wait() (suspends the process), .once() (waits for the next message), or .on() (subscribes to a continuous stream). Send data from backends with .send(runId, data) or from frontends with the new useInputStreamSend React hook. (#3146)
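The consume-side semantics can be illustrated with a small in-memory model. This is a toy sketch, not the trigger.dev implementation: the class, its fields, and the double delivery to both subscribers and one-shot waiters are invented here purely to show the difference between .once()-style and .on()-style consumption.

```typescript
// Toy model of input stream consumption semantics (illustrative only).
type Listener<T> = (data: T) => void;

class InputStreamModel<T> {
  private buffer: T[] = [];                          // messages not yet consumed by once()
  private waiters: Array<(data: T) => void> = [];    // pending once() promises
  private subscribers: Listener<T>[] = [];           // on() listeners

  // Backend side: push a message into the stream.
  send(data: T): void {
    for (const sub of this.subscribers) sub(data);   // on(): every message
    const waiter = this.waiters.shift();
    if (waiter) waiter(data);                        // once(): next message only
    else this.buffer.push(data);
  }

  // Task side: resolve with the next message (like .once()/.wait()).
  once(): Promise<T> {
    const buffered = this.buffer.shift();
    if (buffered !== undefined) return Promise.resolve(buffered);
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  // Task side: subscribe to every subsequent message (like .on()).
  on(listener: Listener<T>): void {
    this.subscribers.push(listener);
  }
}
```

The key distinction the sketch shows: a subscriber sees every message, while a one-shot waiter consumes exactly one.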
Server changes
These changes affect the self-hosted Docker image and Trigger.dev Cloud:
Two-level tenant dispatch architecture for batch queue processing. Replaces the
single master queue with a two-level index: a dispatch index (tenant → shard)
and per-tenant queue indexes (tenant → queues). This enables O(1) tenant
selection and fair scheduling across tenants regardless of queue count. Improves batch queue processing performance. (#3133)
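The two-level index described above can be sketched as follows. This is a conceptual model with invented names, not the actual BatchQueue code: it only shows why tenant selection is O(1) and fair regardless of how many queues a tenant owns.

```typescript
// Conceptual sketch of a two-level tenant dispatch index.
// Level 1: dispatch index, shard -> tenants on that shard.
// Level 2: per-tenant queue indexes, tenant -> that tenant's queues.
class TenantDispatcher {
  private shardTenants = new Map<number, string[]>(); // shard -> tenants
  private tenantQueues = new Map<string, string[]>(); // tenant -> queues
  private cursor = new Map<number, number>();         // shard -> round-robin position

  addTenant(shard: number, tenant: string, queues: string[]): void {
    const tenants = this.shardTenants.get(shard) ?? [];
    tenants.push(tenant);
    this.shardTenants.set(shard, tenants);
    this.tenantQueues.set(tenant, queues);
  }

  // O(1) fair tenant selection: round-robin over the shard's tenant
  // list, independent of each tenant's queue count.
  nextTenant(shard: number): string | undefined {
    const tenants = this.shardTenants.get(shard);
    if (!tenants || tenants.length === 0) return undefined;
    const i = this.cursor.get(shard) ?? 0;
    this.cursor.set(shard, (i + 1) % tenants.length);
    return tenants[i];
  }

  queuesFor(tenant: string): string[] {
    return this.tenantQueues.get(tenant) ?? [];
  }
}
```

Note how a tenant with two queues and a tenant with one queue still alternate evenly, which a single master queue keyed by queue would not guarantee.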
Add input streams with API routes for sending data to running tasks, SSE reading, and waitpoint creation. Includes a Redis cache for fast .send() to .wait() bridging, dashboard span support for input stream operations, and s2-lite support with a configurable S2 endpoint, access token skipping, and S2-Basin headers for self-hosted deployments. Adds s2-lite to Docker Compose for local development. (#3146)
Speed up batch queue processing by disabling cooloff and increasing the batch queue processing concurrency limits on the cloud.
Move batch queue global rate limiter from FairQueue claim phase to BatchQueue worker queue consumer for accurate per-item rate limiting. Add worker queue depth cap to prevent unbounded growth that could cause visibility timeouts. (#3166)
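The depth cap works along these lines. An illustrative sketch, not the BatchQueue implementation; the class and its cap-rejection behavior are invented here to show how a bounded queue keeps items from sitting long enough to hit visibility timeouts.

```typescript
// Sketch of a worker queue with a depth cap (illustrative only).
class CappedWorkerQueue<T> {
  private items: T[] = [];
  constructor(private readonly maxDepth: number) {}

  // Returns false once the cap is reached; the producer backs off
  // instead of growing the backlog without bound.
  enqueue(item: T): boolean {
    if (this.items.length >= this.maxDepth) return false;
    this.items.push(item);
    return true;
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }

  get depth(): number {
    return this.items.length;
  }
}
```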
Fix a race condition in the waitpoint system where a run could be blocked by a completed waitpoint but never be resumed because of a PostgreSQL MVCC issue. This was most likely to occur when creating a waitpoint via wait.forToken() at the same moment as completing the token with wait.completeToken(). Other types of waitpoints (timed, child runs) were not affected. (#3075)
Fix metrics dashboard chart series colors going out of sync and widgets not reloading stale data when scrolled back into view. (#3126)
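The intended semantics of the waitpoint fix above, that completing a token must resume a waiter regardless of ordering, can be modeled like this. A toy in-memory model, not the PostgreSQL-backed implementation; the class and method names are invented for illustration.

```typescript
// Toy model of token waitpoints: a waiter must resolve whether it
// registers before or after the token is completed.
class TokenWaitpoints<T> {
  private completed = new Map<string, T>();
  private waiters = new Map<string, Array<(output: T) => void>>();

  completeToken(id: string, output: T): void {
    this.completed.set(id, output);
    // Resume anyone already waiting on this token.
    for (const resolve of this.waiters.get(id) ?? []) resolve(output);
    this.waiters.delete(id);
  }

  forToken(id: string): Promise<T> {
    const done = this.completed.get(id);
    // Token already completed: resolve immediately instead of blocking
    // forever (the bug the fix above addresses at the database level).
    if (done !== undefined) return Promise.resolve(done);
    const list = this.waiters.get(id) ?? [];
    return new Promise((resolve) => {
      list.push(resolve);
      this.waiters.set(id, list);
    });
  }
}
```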
Gracefully handle oversized batch items instead of aborting the stream.
When an NDJSON batch item exceeds the maximum size, the parser now emits an error marker instead of throwing, allowing the batch to seal normally. The oversized item becomes a pre-failed run with the PAYLOAD_TOO_LARGE error code, while the other items in the batch process successfully. This prevents batchTriggerAndWait from seeing connection errors and retrying with exponential backoff.
Also fixes the NDJSON parser not consuming the remainder of an oversized line split across multiple chunks, which caused "Invalid JSON" errors on subsequent lines. (#3137)
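The behavior described above can be sketched as follows. This is an illustrative parser, not the real one; the type and class names are invented, and line size is measured in characters rather than bytes for simplicity. It shows both fixes: an oversized line yields an error marker instead of a thrown error, and the remainder of that line is consumed even when it spans multiple chunks, so later lines still parse.

```typescript
// Sketch of an NDJSON parser that tolerates oversized lines.
type ParsedItem =
  | { ok: true; value: unknown }
  | { ok: false; error: "PAYLOAD_TOO_LARGE" };

class NdjsonParser {
  private partial = "";
  private skippingOversized = false;

  constructor(private readonly maxLineLength: number) {}

  push(chunk: string): ParsedItem[] {
    const out: ParsedItem[] = [];
    if (this.skippingOversized) {
      // Consume the remainder of an oversized line, even when it is
      // split across multiple chunks; no second marker is emitted.
      const skipTo = chunk.indexOf("\n");
      if (skipTo === -1) return out;   // still inside the oversized line
      chunk = chunk.slice(skipTo + 1); // resume after it ends
      this.skippingOversized = false;
    }
    this.partial += chunk;
    let nl: number;
    while ((nl = this.partial.indexOf("\n")) !== -1) {
      const line = this.partial.slice(0, nl);
      this.partial = this.partial.slice(nl + 1);
      if (line.length > this.maxLineLength) {
        out.push({ ok: false, error: "PAYLOAD_TOO_LARGE" }); // pre-failed item
      } else if (line.length > 0) {
        out.push({ ok: true, value: JSON.parse(line) });
      }
    }
    // A partial line already over the limit: emit the marker now and
    // discard the rest of the line as further chunks arrive.
    if (this.partial.length > this.maxLineLength) {
      this.partial = "";
      this.skippingOversized = true;
      out.push({ ok: false, error: "PAYLOAD_TOO_LARGE" });
    }
    return out;
  }
}
```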
Require that the user is an admin during an impersonation session. Previously only the impersonation cookie was checked; now the real user's admin flag is verified on every request. If the admin flag has been revoked, the session falls back to the real user's ID. (#3078)
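The check above amounts to something like the following. A minimal sketch with invented names, not the actual session code: it only captures the rule that impersonation requires a currently-valid admin flag, with fallback to the real user's own ID.

```typescript
// Sketch of resolving the effective user for a request during impersonation.
interface User {
  id: string;
  isAdmin: boolean;
}

function resolveSessionUserId(
  realUser: User,                       // freshly loaded on every request
  impersonatedUserId: string | undefined // from the impersonation cookie
): string {
  // Only a (still-)admin real user may act as someone else; a revoked
  // admin flag drops the session back to the real user's ID.
  if (impersonatedUserId && realUser.isAdmin) return impersonatedUserId;
  return realUser.id;
}
```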
Raw changeset output
Releases
@trigger.dev/build@4.4.2
Patch Changes
@trigger.dev/core@4.4.2
trigger.dev@4.4.2
Patch Changes
@trigger.dev/build@4.4.2
@trigger.dev/core@4.4.2
@trigger.dev/schema-to-json@4.4.2
@trigger.dev/python@4.4.2
Patch Changes
@trigger.dev/sdk@4.4.2
@trigger.dev/build@4.4.2
@trigger.dev/core@4.4.2
@trigger.dev/react-hooks@4.4.2
Patch Changes
Add input streams for bidirectional communication with running tasks. Define typed input streams with streams.input<T>({ id }), then consume inside tasks via .wait() (suspends the process), .once() (waits for the next message), or .on() (subscribes to a continuous stream). Send data from backends with .send(runId, data) or from frontends with the new useInputStreamSend React hook. (#3146)
Upgrade the S2 SDK from 0.17 to 0.22 with support for custom endpoints (s2-lite) via the new endpoints configuration, the AppendRecord.string() API, and the maxInflightBytes session option.
Updated dependencies:
@trigger.dev/core@4.4.2
@trigger.dev/redis-worker@4.4.2
Patch Changes
@trigger.dev/core@4.4.2
@trigger.dev/rsc@4.4.2
Patch Changes
@trigger.dev/core@4.4.2
@trigger.dev/schema-to-json@4.4.2
Patch Changes
@trigger.dev/core@4.4.2
@trigger.dev/sdk@4.4.2
Patch Changes
Add input streams for bidirectional communication with running tasks. Define typed input streams with streams.input<T>({ id }), then consume inside tasks via .wait() (suspends the process), .once() (waits for the next message), or .on() (subscribes to a continuous stream). Send data from backends with .send(runId, data) or from frontends with the new useInputStreamSend React hook. (#3146)
Upgrade the S2 SDK from 0.17 to 0.22 with support for custom endpoints (s2-lite) via the new endpoints configuration, the AppendRecord.string() API, and the maxInflightBytes session option.
fix(sdk): batch triggerAndWait variants now return the correct run.taskIdentifier instead of unknown (#3080)
Add a PAYLOAD_TOO_LARGE error to handle graceful recovery when batch trigger items have payloads that exceed the maximum payload size (#3137)
Updated dependencies:
@trigger.dev/core@4.4.2