Billing for paid plan organizations will be based on provisioned disk rather than used database space:
Each project starts with 8 GB disk provisioned by default.
The first 8 GB of provisioned disk per project is free, then $0.125 per additional GB.
Charges are prorated down to the hour, which is advantageous for short-lived projects and branches.
Provisioned disk from Read Replicas will also be included in billing.
This also enables upcoming features for enhanced control over disk and Postgres parameters.
Timeline
This change will be rolled out to new customers on August 26th, 2024 and will be gradually rolled out to existing customers shortly after.
Changes
We are adjusting our pricing to offer more flexibility and self-serve options for developers wanting to tune their disk and Postgres configuration. For example:
Some developers want disks with higher throughput
Some developers want to store more than 1GB of WAL (for tools like Airbyte/PeerDB, or adding more read replicas)
To make this available, we will start billing for provisioned disk size (rather than database space used). Previously, costs associated with WAL files were not directly billed, but users also could not change max_wal_size (the default is 1GB).
There is no action needed on your end. You will automatically be transitioned to the new billing model throughout the next couple of weeks. In case there is any change in your monthly bill, we will reach out to you proactively with additional information and give you a grace period to decrease your usage.
For customers on the Free Plan, there will be no changes; the total database space remains capped at 500MB. These adjustments only apply to customers on paid plans. The database disk will continue to autoscale when nearing capacity for paid plan customers.
|  | Before | After (August 26th, 2024) |
| --- | --- | --- |
| Price | $0.125 / GB | $0.000171 / GB-Hr |
| Change | We take the average database space used for all projects, independent of how many days/hours you store the files, and sum it up. | We bill you based on the provisioned disk usage every hour. The first 8 GB per project are free. Read Replicas will also incur disk costs. |
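As a rough illustration of how the new hourly rate adds up (a sketch assuming a 730-hour month; the actual hours vary with the calendar month):

```ts
// Hypothetical project that keeps 12 GB of provisioned disk for a full month.
const provisionedGb = 12;
const freeGbPerProject = 8; // the first 8 GB per project are free
const pricePerGbHour = 0.000171; // $ per GB-hour
const hoursInMonth = 730; // assumption for illustration

const billableGb = Math.max(0, provisionedGb - freeGbPerProject);
const monthlyDiskCost = billableGb * pricePerGbHour * hoursInMonth;

console.log(monthlyDiskCost.toFixed(2)); // ≈ 0.50, i.e. roughly $0.125 per GB-month for the 4 extra GB
```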
Your disk size is based on your database space usage. As a first step, you need to identify current database space usage and reduce it. To see your current database space usage, head over to the built-in “Database” project report. Once you have reduced your database space and want to reduce your provisioned disk, you can upgrade your Postgres version through your project settings to automatically rightsize your disk. For further information around disk management and reducing database space, please refer to our docs.
If your current disk size is >8GB, this is likely going to impact you. Note that this will be gradually rolled out and you will be notified about the concrete impact on your organization and given a 3-month grace period, which gives you time to right-size your disk and minimize the impact of this change.
Further to earlier discussions, the threshold for transitioning large databases to use physical backups for their daily backups is being lowered to 15GB in the next few days.
Physical backups are more performant, have lower impact on the db, and avoid holding locks for long periods of time. Restores continue to work as expected, but backups taken using this method can no longer be downloaded from the dashboard.
Over the next few months, we'll be introducing functionality to restore to a separate, new database, allowing for the perusal of the backed up data without disruption to the original project.
Currently, usage data on the invoice breakdown and organization usage page has a 24-hour delay. Starting from August 26th, the usage data will have no more than a 1-hour delay for new customers. Afterwards, the change will be rolled out to existing customers gradually. We're also working on additional improvements to provide better usage insights.
Additionally, we are revamping invoices to provide more detailed breakdowns of usage for enhanced transparency. Due to our new proration of project add-ons and storage down to the hour, you may notice slight variances in your monthly bill. For the majority of line items, you’ll see the project reference and usage on the invoice, which should make it clearer which project allocated the usage/costs.
A few examples:
Compute Hours are broken down per project, and the compute credits ($10) are displayed as a discount on the compute line item.
Egress is broken down per project and displays the included quota (250GB) and overage pricing ($0.09/GB)
Realtime Messages line item shows package-based pricing with $2.50 per million.
Moving to hourly usage-based billing for IPv4, Custom Domain and Point-in-time recovery#
We’re moving to billing all project add-ons usage-based, prorated down to the hour, at the end of your billing cycle. We're not altering the monthly prices.
Before: Project add-ons are paid upfront. Every time you change an add-on, you immediately pay for the remaining time or get credits for unused time. Each change triggers an additional invoice.
After: We bill you at the end of your billing cycle for the hours you’ve used the project add-ons. No in-between charges, credit prorations or additional invoices.
We're updating how we bill project add-ons (IPv4, Point-in-time recovery, Custom Domain) without changing their monthly prices. This change will be rolled out on August 26th, 2024 for new customers and shortly after for existing customers.
Previously, when you added a project add-on, like IPv4 or PITR, you were immediately invoiced and charged for the remaining billing cycle period. At the start of a new cycle, you paid upfront for the entire month. If you removed an add-on mid-cycle, you received a credit for unused time.
Starting August 26th, you will be billed retrospectively for these add-ons, similar to Compute Hours. There are no more upfront charges, prorated invoices, or credits. You simply pay for the exact hours you use the project add-ons.
Plans (Pro/Team/Enterprise) are still charged upfront and there are no changes to how they are billed.
We’re moving to more granular billing periods. We're not altering the prices or storage quotas. Every customer will benefit from this change, especially short-lived projects and customers using Branching.
After the billing changes on August 26th, there will be no change in pricing:
|  | Before | After |
| --- | --- | --- |
| Total Usage | 4,200 GB | 3,024,000 hours |
| Usage Discount (Pro Plan) | (100 GB) | (74,400 hours) |
| Billable Usage | 4,100 GB | 2,949,600 hours |
| Price | $0.021 / GB | $0.00002919 / GB / hour |
| Total Cost | $86.10 | $86.10 |
Example 2: Pro Plan Org, active for part of the month#
In this scenario, an Organization is on the Pro Plan with 3 active projects.
Usage
In this scenario, some of the projects are only active for a few days in the month:
|  | Storage | # Days Active | After: GB Hours |
| --- | --- | --- | --- |
| Project A | 200 GB | 2 | 9,600 (48 hours * 200 GB) |
| Project B | 1,500 GB | 15 | 540,000 (360 hours * 1,500 GB) |
| Project C | 2,500 GB | 30 | 1,800,000 (720 hours * 2,500 GB) |
| Total | 4,200 GB |  | 2,349,600 hours |
Billing
Currently we charge you for the full 4,200 GB, even though Projects A and B weren’t active for the entire month. After August 26th, this scenario will be 22.87% cheaper.
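As a rough sanity check of that figure (a sketch assuming the same 100 GB / 74,400 GB-hour Pro Plan usage discount as in the earlier example):

```ts
// Before: flat monthly GB pricing. After: hourly GB-hour pricing.
const beforeCost = (4200 - 100) * 0.021; // 4,100 billable GB at $0.021/GB
const afterCost = (2349600 - 74400) * 0.00002919; // 2,275,200 billable GB-hours

console.log(beforeCost.toFixed(2)); // 86.10
console.log(afterCost.toFixed(2)); // ≈ 66.41
console.log(((1 - afterCost / beforeCost) * 100).toFixed(2)); // ≈ 22.87 (% cheaper)
```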
This change should be universally beneficial, but if there is anything that we have missed just let us know and we will make sure we consider it before rolling out this change.
In previous versions of Wrappers, all foreign data wrappers needed to be built into the wrappers extension. That develop/test/release cycle is time consuming and rests entirely on the Supabase team. To speed up this process and give the community more flexibility, we're adding Wasm support to the Wrappers framework. With this new feature, users can build their own FDW using Wasm and use it instantly on the Supabase platform.
Another benefit is the improved modularity: each FDW can be updated and loaded individually, so new FDW releases will ship quicker than before, and the wrappers extension won't bloat as more FDWs are added.
There are no changes from the end-user's perspective; all existing native FDWs remain the same. The Wasm FDW only adds a new way of developing and distributing FDWs.
We’ve introduced the Edge Runtime Inspector, a powerful new feature in the CLI that helps you inspect and debug edge functions more efficiently.
Pull Request
You can now view and abort queries currently running on your database (primary or replica) in the Supabase Studio SQL Editor. This feature gives you greater control and flexibility in managing your queries.
Pull Request
The Logflare to Elastic filebeat backend has been merged. This integration enables log drains to ELK stacks, providing more robust logging and monitoring capabilities.
Documentation
We have published a guide on how to use the Supabase I/O charts to identify when you may need to scale your database, optimize your queries, or spin up a read replica.
Github Discussion
Breaking Change to Supabase Platform Access Control#
On July 26, 2024, Supabase will be making breaking changes to our platform’s access control system. Developer and Read-Only roles will no longer have write access to an organization’s GitHub and Vercel integrations. These changes will not affect existing integrations that are in place.
Github Discussion
Starting June 24, 2024, paused Free Tier projects are restorable for 90 days. There is a grace period where all paused projects will continue to be restorable until September 22, 2024.
Github Discussion
We’ve made significant improvements to our billing system to help you better understand compute pricing. These changes aim to prevent unexpected charges and provide clarity on “Compute Hours.”
Github Discussion
Dribble - Flutter NBA name guess game available for iOS and Android [Website]
EvalHub - an open-source platform for researchers to discover AI evaluation metrics [Website]
SVGPS - Removes the burden of working with a cluster of SVG files by converting your icons into a single JSON file [Website]
CleanCoffee - Lean coffee discussion utility where you can create boards and share with friends [Website]
Rewritebar - Improve your writing in any macOS application with AI assistance. Quickly correct grammar mistakes, change writing styles or translate text [Website]
Supabase HTTP APIs are no longer using DigiCert as the root CA. This should have no impact on the vast majority of environments, as the other CAs in use are essentially universally trusted.
If your client environment only trusts certificates signed by DigiCert, you could be impacted. We're currently using Cloudflare to serve our HTTP APIs, and recommend ensuring that any client environment that only trusts a specific subset of CAs trusts all of the CAs Cloudflare uses.
In an effort to simplify pricing, we are removing usage-based billing for the number of Edge Functions in your projects. Instead, we are moving to a bigger quota across all plans at no extra cost. We picked the limits to ensure all customers benefit from this change.
Free Plan customers can now create 25 instead of 10 functions without the need to upgrade to a paid Plan.
|  | Free Plan | Pro Plan | Team Plan | Enterprise Plan |
| --- | --- | --- | --- | --- |
| Before | 10 included | 100 included, then $10 per additional 100 | 100 included, then $10 per additional 100 | Custom |
| After | 25 included | 500 included | 1000 included | Unlimited |
This change is effective immediately and in case you were previously exceeding the number of included functions on a paid Plan, you will no longer be charged for it.
These breaking changes are rolling out on July 26, 2024 and affect all organizations that have members assigned either the Developer or Read-Only roles.
All Supabase organizations can invite users and assign them one of the following roles as part of their membership in the organization:
Owner
Administrator
Developer
Read-Only (available only on Team and Enterprise plans).
Depending on the role, members are authorized to access a specific set of the organization's resources, such as permission to create a new project or change the billing email.
We recently re-evaluated the access that the Developer and Read-Only roles have and decided to implement changes to restrict them on a couple of resources to improve your organizations' security.
On July 26, 2024, we will turn off certain access that the Developer and Read-Only roles currently have to your organization's resources. The following table illustrates the breaking changes that will go into effect:
Option to use a dedicated api schema for your project#
By default, the public schema is used to generate API routes for your database. In some cases, it's better to use a custom schema - this is important if you use tools that generate tables in the public schema to prevent accidental exposure of your data.
The dashboard supports this workflow through 2 options: either at the project creation step under "Security Options", or in the project's API settings after your project has been created. More information about this workflow in our documentation here!
Supabase support for Postgres 13 is being deprecated as of 15th July 2024, and support for it will be fully removed on 15th November 2024.
All Postgres 13 projects should be upgraded to Postgres 15 before 15th November, 2024.
Any projects still on Postgres 13 after the 15th of November 2024 will be automatically upgraded to Postgres 15. If any Postgres extensions or functions are in use that cause the upgrade process to fail, a backup will be taken instead, and the project will be paused.
Postgres 15 comes with numerous features, bug fixes and performance improvements. Check out the announcement blog posts to find out what each version introduces.
Option to disable Data API when creating projects#
You can now opt to disable the Data API when creating a new project under a section called "Advanced Options", such that you will only be able to connect to your database via connection string. Note that this setting can be subsequently updated again in the project's API settings if and when you want to connect your project via the client libraries.
You can now control client access to Realtime Broadcast and Presence by adding Row Level Security policies to the realtime.messages table! Read more about it in our documentation here!
Previously, the Table Editor on the dashboard would run a select count(*) from table query to retrieve the number of rows in the table and display it in the editor (this also supports the pagination logic). Understandably, this query can be resource intensive and expensive if the table in question is particularly large. As such, we've added some optimizations around this logic to only retrieve the exact row count if the table has fewer than 50k rows; otherwise, we'll retrieve an estimate of the row count instead. You'll still be able to get the exact row count, but on demand instead.
Greater clarity on costs when creating new projects#
One of our bigger papercuts in terms of billing is customers not understanding compute pricing and the fact that they cannot launch unlimited projects for $25 in total per month. Customers also get confused by "Compute Hours" on their bill. These changes aim to alleviate any "surprise" compute charges and serve as a kaizen improvement.
The changes involved are only applicable to paid plan organizations, as they're irrelevant for free plan organizations.
Address Table Editor "resorting" of rows when rows are updated and no active sorts applied#
If you've tried updating a table via the Table Editor without an active sort in place, you'd have noticed that the rows seem to re-sort themselves, specifically the row that you were updating. While this is because rows are returned in an unspecified order when the select query has no sort clause, it certainly must've been a confusing UX. We've alleviated this problem by setting a default sort clause when reading the table via the Table Editor, which will get overridden once you've set a sort via the UI.
This only impacts projects on the Free Plan because projects in any of the paid plans cannot be paused.
Beginning June 24, 2024, we're updating some project pause/restore behavior:
paused Free projects are restorable for 90 days following their pause date
any Free projects paused before June 24 will be able to restore at any point before September 22, 2024 so they have a full 90 days from when this announcement is made
once a project is no longer restorable, the "restore" option is replaced with an option to download the latest logical backup, taken right before the project is paused, and all Storage objects
This change is being made to enable us to maintain high development velocity on the platform. Previously, paused projects could be restored indefinitely. That creates the need for the platform to remain fully backwards compatible with outdated versions of Postgres and associated extensions. The update allows us to provide a reasonable pause/restore window while gaining the ability to evolve the platform.
Supabase underwent Consolidation Month™ to focus on initiatives that improve the stability, scalability, and security of our products. We also have exciting product announcements that we can’t wait to share. Let’s dive in!
We kicked off Consolidation Month (no it’s not actually trademarked) during the month of May. During this time, every product team within Supabase addressed outstanding performance and stability issues of existing features. Here’s a small subset of initiatives and product announcements as part of Consolidation Month:
Auth Launches @supabase/ssr for Better SSR Framework Support#
The newly released @supabase/ssr package improves cookie management, developer experience, and handling of edge cases in various SSR and CSR contexts. We’ve added extensive testing to prevent issues that users experienced with the @supabase/auth-helpers package.
pgvector v0.7.0 Release Features Significant Performance Improvements#
pgvector v0.7.0 introduced float16 vectors that further improve HNSW build times by 30% while reducing shared memory and disk space by 50% when both index and underlying table use 16-bit float. The latest version also adds sparse and bit vectors as well as L1, Hamming, and Jaccard distance functions.
The Edge Functions team has significantly reduced the error rate for functions encountering memory issues by implementing better safeguards. This has greatly minimized errors with the 502 status code. Additionally, status codes and limits are now documented separately.
Dashboard Supports Bigger Workloads as Projects Grow#
The Supabase Dashboard is now better equipped to handle your projects, regardless of their size. We have implemented sensible defaults for the amount of data rendered and returned in the Table and SQL Editors to prevent browser performance issues while maintaining a snappy user experience.
Realtime now emits standardized error codes, providing descriptions of their meanings and suggested actions. This enhancement improves your error-handling code and helps to narrow down whether the issue lies with the database, Realtime service, or client error.
We’ve improved the prompt and output of our RLS AI Assistant by including best practices found in our RLS docs and upgrading to OpenAI’s newest GPT-4o. We’ve also introduced numerous test scenarios to make sure you’re getting the right security and performance recommendations by comparing parsed SQL with the help of pg_query.
You might be wondering what we've been up to the past few weeks when we'd usually have some cadence in our weekly updates with our GitHub discussion updates - the team at Supabase had decided to commit to a month of consolidation by putting our efforts into each of the following pillars: Alerting, Testing, and Scaling. Let's jump right in to see what this means for the Dashboard! 💪🏻
🚨 Alerts: Enabling proactive resolution to issues that users run into#
We want to find out about bugs before you do. In an effort to lower the time between deploying a bug and fixing it, we’ve set up some alerts that will enable us to more proactively catch bugs, in particular for show-stopping problems (e.g. that awful “A client side exception has occurred” screen)!
Added alerts for critical points of failures (PR)
This includes any errors that will completely prevent you from using the Supabase dashboard.
Added alerts for full page client crash events with a custom error boundary (PR)
Having a custom error boundary also allows us to potentially provide contextual hints/resolutions to the problem faced, such as what we did in this PR.
🛠️ Testing: Identifying and covering critical points of failure#
We believe in ensuring that, at a minimum, our critical paths are covered by tests, as opposed to ensuring our dashboard has 100% test coverage. Anything that might completely block our users from doing what they need to do should be caught as much as possible before reaching production.
Moved to Vitest + Integrating packages to make writing tests easier (PR)
We’ve revamped our current test setup to use Vitest instead of Jest and have integrated Mock Service Worker (MSW) and next-router-mock. These all make it easier to write better tests and reduce the amount of configuration we need to set up the tests.
Expanded on Playwright for E2E tests (PR)
We previously had a couple of Playwright tests written over in our tests repository that we’ve decided to shift over to our main repository. We’re also in the midst of writing more tests and making them public.
📈 Scaling: Supporting bigger workloads as the project grows#
The dashboard should be able to keep up with your projects, no matter how big they grow - and even if unable to, it should never block you, or leave you feeling confused. This effort involved several solutions: gracefully failing if unable to handle large workloads, better observability, or even just better error handling to allow our users to self-service most errors. We were able to get improvements out for several common issues. Check out each individual PR for more information 😄
Rendering large column data within the Table Editor (PR)
We’re now limiting rendering each text/JSON based column data in the table editor at a max of 10kB when initially viewing the table, which should alleviate browser performance issues when rendering tables with large row data. You may load the entire column data on demand instead.
Handling large results within the SQL editor (PR)
A limit parameter will now be added as a suffix to select queries that are run in the SQL editor, which should prevent the browser performance from degrading when trying to query too much data. The limit parameter can be optionally removed if your intention might be to pull data in greater quantity.
View ongoing queries in the SQL editor and support terminating them (PR)
This came in response to a user who accidentally ran a large query through the SQL editor which ended up in the API request timing out, giving the impression that the query stopped running when in fact it was still running on the database in the background. This should help surface such problems and give the tooling required to abort erroneous queries.
Contextual errors when something goes wrong (PR)
While the attached PR was more of a POC, this is the direction we want to move towards: providing contextual information and suggesting possible resolutions whenever users run into an error.
Of course, consolidation is not a one-time event, but an ongoing effort that our team is committing to in parallel with shipping new and exciting features. We hope that as our users, you will all benefit from this with fewer blockers to keep you from building cool stuff 🙂
As always, if you have any feedback for us - we're just a message away through the feedback widget to the top right corner of the dashboard! We're always listening 👂🏻 (as in reading 👀).
I'm Stojan, a member of the Supabase Auth team, bringing some updates about what's next with @supabase/ssr. This is the recommended package that helps you use the Supabase JavaScript client with SSR frameworks such as NextJS, Remix, SvelteKit and others.
We've been quite busy recently gathering feedback, reviewing common complaints and bugs with the package, and using it in the popular SSR frameworks. We've identified a few areas needing improvement and we've already started implementing them.
The package is still on major version 0, indicating its beta status. We plan to move it to major version 1 in the coming months making it the preferred way of using the Supabase JavaScript library in your favorite SSR framework.
First, we'll extract @supabase/ssr's code from the auth-helpers repository into its own. We’re doing this because:
@supabase/auth-helpers-x (like for NextJS) is no longer supported by the team, as we expect people to move to @supabase/ssr.
It's no longer about "auth-helpers," but rather about how you can most effectively and ergonomically use the Supabase Client in various SSR and CSR contexts.
A standalone repo makes it easier for the community to contribute and for us to track issues.
We're going to release a fairly ground-up reimplementation of the package. This should come as version 0.4.0 around mid-June. We've received a lot of signal in the past months from developers and the community about possible improvements for developer ergonomics and better handling for edge cases.
The reason for this change is because @supabase/ssr is really just a thin layer for cookie management on top of @supabase/supabase-js. We've identified some improvements that reduce odd and difficult-to-diagnose behavior. The new implementation will boast over 90% test coverage, including testing for issues that we’ve seen commonly reported so far.
As part of the new implementation, we are changing the API. The old API will be deprecated when we reach v1.0.0. This is to ensure the best possible experience for both developers and users. For most use cases and happy paths, the deprecated API will continue working during the phase-out, but we encourage switching as soon as possible. Once we release v1.0.0, major version 0 will no longer be maintained.
The change in the API is quite simple. Instead of defining three cookie access methods (get, set and remove), you will define two methods, getAll and setAll, roughly like so:
createServerClient(supabaseUrl, supabaseKey, {
  cookies: {
    getAll() { /* return all cookies from your framework's cookie store */ },
    setAll(cookiesToSet) {
      // set the cookies exactly as they appear in the cookiesToSet array
    },
  },
})
Note that for createBrowserClient nothing needs to be done in most cases: it automatically works with the document.cookie API.
The change should be trivial for most SSR frameworks, and we'll be sure to update the guides to instruct you on how to change your code into this new way of accessing cookies.
Thanks for all your feedback! Feel free to ask any questions below!
Log drains are currently in private alpha and available for Team and Enterprise customers. We are still firming up the pricing and documentation; it will likely involve a flat fee plus variable egress usage. This will be announced separately through official channels.
We will be supporting Datadog as our initial provider.
If you're ingesting the metrics endpoint into your pre-existing managed infrastructure without using the supabase-grafana app, this change does not affect you. If you're running the supabase-grafana app using the docker-compose mechanism, you are also not affected.
Fly applications launched off the repository between December 10, 2023, and May 16, 2024 are impacted, and will experience:
Fly application being shut down after periods of inactivity of a few minutes (default Fly.io behaviour)
Historical data will not be persisted after such a shutdown
The fix to the deployment instructions ensures that a persistent volume is created to store the data on, which prevents loss in case of a machine shutdown or restart. Additionally, autoshutdown behaviour is disabled, in order to prevent the app from being paused due to inactivity.
In order to fix an existing, already deployed Fly application, you can edit its configuration to disable auto_stop_machines, and create a persistent volume, and mount it at /data (similar to the updated deployment instructions). Please note that as the newly created persistent volume will be empty to start, any existing metrics data will not be preserved as part of this change. If doing so is necessary, you can initially mount it at a separate path, copy the data over, and finally mount it at /data.
This was previously behind a feature flag but we're now making this available by default, which will replace the existing single prompt UI that you saw previously at the top of the SQL editor. Once again, thank you all so much for the feedback that you've left us - we really appreciate them and they definitely do help in guiding us towards the ideal dashboard experience for everyone. 🙂🙏
We're also aware that the feature preview functionality is missing in the local set up - rest assured we're looking into it and hope to get a fix out soon for everyone!
A step towards slightly more contextual error messages#
A topic that came up in one of our internal discussions was self-serviceability, and we realised that our error messages could do a much better job than just informing users what the error is about - especially when the errors come from Postgres directly and the messages can be slightly cryptic for those not familiar with Postgres (yet 😉). The PR linked here is just a small idea and example of what we plan to do with error messages in the future: giving users more context about the errors, like possible solutions and links to relevant documentation. Hopefully this will make using the dashboard a little easier 🙂
Day 1 - Supabase is officially launching into General Availability (GA)#
Supabase has moved to General Availability (GA) with over 1 million databases under management and over 2,500 databases launched daily. We’ve been production ready for years and now we are fully confident that we can help every customer become successful, from weekend projects to enterprise initiatives at organizations like Mozilla, 1Password, and PwC.
Day 2 - Supabase Functions now supports AI models#
Supabase Functions has added a native API that makes it easy to run AI models within your functions while removing nasty cold starts. You can use the gte-small embedding model to generate text embeddings or bring your own Ollama server to tap into many more embedding models and Large Language Models (LLMs) like Llama3 and Mistral. Soon we’ll provide hosted Ollama servers so you won’t have to manage them yourselves for a more seamless experience.
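As a rough sketch, generating a text embedding with the built-in gte-small model inside an Edge Function could look like this (the request shape and field names are illustrative):

```ts
// Runs in the Supabase Edge Runtime (Deno).
const session = new Supabase.ai.Session('gte-small');

Deno.serve(async (req) => {
  const { input } = await req.json();

  // Generate a text embedding for the given input string.
  const embedding = await session.run(input, {
    mean_pool: true,
    normalize: true,
  });

  return new Response(JSON.stringify({ embedding }), {
    headers: { 'Content-Type': 'application/json' },
  });
});
```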
Day 3 - Supabase Auth now supports Anonymous sign-ins#
Supabase Auth heard your requests and went to work building anonymous sign-ins which enable you to create temporary users that have yet to sign up for your application. This lowers the friction for visitors to use your application while making it easy to convert them to permanent users once they’re hooked.
Day 4 - Supabase Storage now supports the S3 protocol#
Supabase Storage already has standard and resumable uploads and now supports the industry standard S3 protocol enabling multipart upload and compatibility with a myriad of tools such as AWS CLI, Clickhouse, and Airbyte for a wide array of use cases.
Supabase has managed over 1 million databases over the last four years and has seen all manner of use cases with common pitfalls that we’re helping our customers address with our Security, Performance, and Index Advisors. These Advisors will help to surface and fix insecure database configurations and recommend database and query optimizations to keep your database secure and performant for your mission critical workloads.
We are delighted that so many high quality projects were submitted but in the end there could only be one Best Overall Project. The decision wasn’t easy but the Supabase panel of judges chose vdbs (vision database SQL) for the honorific. Congratulations 👏 to @xavimonp who will receive the prize of Apple AirPods.
You can now use JSR packages in your Edge Functions. JSR is a modern package registry for JavaScript and TypeScript created by the Deno team. With JSR support, you can use the latest versions of popular Deno packages like Oak.
How to use:
import { Application } from "jsr:@oak/oak/application";
import { Router } from "jsr:@oak/oak/router";
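As a rough sketch (repeating the imports above; the route and response body are illustrative), an Edge Function using Oak could look like this:

```ts
import { Application } from "jsr:@oak/oak/application";
import { Router } from "jsr:@oak/oak/router";

const router = new Router();
router.get("/hello-oak", (ctx) => {
  // Respond to GET requests on this (illustrative) route.
  ctx.response.body = { message: "Hello from Oak via JSR!" };
});

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

app.listen({ port: 8000 });
```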
For local development, you will need to update the Supabase CLI to version v1.166.1 or above.
Edge Functions also supports using NPM and deno.land/x packages. If you are already using them, no changes are needed.
We've been focusing on improving existing features on the dashboard and fixing some issues over the past week, so while we've got nothing shiny to shout out about, here's still a list of things that we've shipped! 🙂 As always, feel free to let us know if there's something that you guys really want to see in the dashboard - we'll see how we can make it happen 😉
Supabase Auth now supports anonymous sign-ins, which can be used to create temporary users who haven’t signed up for your application yet! This lowers the friction for new users to try out your product since they don’t have to provide any signup credentials.
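Creating a temporary user is a single call with supabase-js; a minimal sketch (project URL and key are placeholders):

```ts
import { createClient } from '@supabase/supabase-js';

// Replace with your own project URL and anon key.
const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// Creates a temporary (anonymous) user and returns a session for them.
const { data, error } = await supabase.auth.signInAnonymously();

if (error) console.error(error);
else console.log('Anonymous user id:', data.user?.id);
```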
Supabase Storage is now officially an S3-Compatible Storage Provider, and now you can use any S3 client to interact with your buckets and files: upload with TUS, serve them with REST, and manage them with the S3 protocol.
We've added support for data wrappers with Auth0, Cognito, Microsoft SQL Server, and Redis! Connect to these external data sources and query them directly from your database.
An option to submit a request to delete your account#
If the day comes that you no longer want to use Supabase (hopefully not!) and want to be removed from our systems entirely, feel free to submit a request to delete your account through the account preferences page.
We’re making a Special Announcement on April 15th with a few more surprises throughout the week. Claim your ticket today so you don’t miss out and enter for a chance to win a set of AirPods Max.
We’ve increased the Supavisor client connection limits, the number of concurrent clients that can connect to your project’s pooler, for projects on Small, Medium, Large, and XL compute instances while pricing remains unchanged.
Conversational AI assistant now available in SQL Editor#
Introducing a conversational AI assistant in the SQL Editor to help you write and iterate on your queries. This is currently under a feature preview and can be enabled with instructions here.
Supavisor pooler port 6543 is transaction-mode only#
We’re simplifying Supavisor connection pooler ports and modes so that port 6543 is transaction mode only and port 5432 continues to be session mode only. If you have the pool mode set to session, we recommend you switch your session-mode connections to pooler port 5432 and set port 6543 to transaction mode.
You may have noticed improved performance from your database over the last couple of weeks. We made some architectural changes to free up resources for your Postgres instance by removing Storage, Realtime, and Pgbouncer from your instance and each are replaced with an equivalent multi-tenant solution, including our new Supavisor connection pooler.
Implementing semantic image search with Amazon Bedrock and Supabase Vector#
In this post we'll create a Python project that implements semantic image search, featuring Amazon Bedrock with Amazon Titan’s multimodal model to embed images, and the Supabase Vecs client library for managing embeddings in your Supabase database with the pgvector extension.
This post explains how authorization works for Realtime Broadcast and Realtime Presence.
This allows you (the developer) to control access to Realtime Channels. We use Postgres Row Level Security to manage access. Developers create Policies which allow or deny access for your users.
Do not forget that RLS policies can reference other tables, which gives you all the flexibility you need to fit your use case - but be aware of the performance impact of heavy RLS queries or non-indexed fields.
The client library has changed its configuration settings to add private: true on channel connect, which determines whether the user is connecting to an RLS-checked channel (see the sketch below).
To achieve RLS checks on your Realtime connection, we created a new table in the realtime schema that you can write RLS policies against to control access to your topics and extensions.
You won’t see any entries recorded in this table, as we roll back the changes made to test out RLS policies to avoid creating clutter in your database.
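For example, opting a client into an RLS-checked channel looks roughly like this with supabase-js (the topic and event names, project URL, and key are illustrative):

```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// `private: true` asks Realtime to authorize this channel against your
// RLS policies on the realtime.messages table.
const channel = supabase.channel('room-1', {
  config: { private: true },
});

channel
  .on('broadcast', { event: 'new-message' }, (payload) => {
    console.log('broadcast received:', payload);
  })
  .subscribe();
```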
Supavisor, Supabase's multi-tenant connection pooler deployed to regional clusters, became production ready back in December 2023. You can read the announcement here.
Since then, we've migrated Supabase projects from PgBouncer, a single-tenant connection pooler deployed to the project's instance, to Supavisor.
However, we kept the previous client connection limits from PgBouncer during the transition across all compute instances.
Today, we're happy to announce that we've increased this limit for compute instances Small, Medium, Large, and XL so your projects can take advantage of additional client connections while pricing remains unchanged. These new limits have already been applied to all existing projects and any new projects spun up.
Here's a quick breakdown:
| Compute Size | Previous Client Limits | New Client Limits |
| --- | --- | --- |
| Small | 200 | 400 |
| Medium | 200 | 600 |
| Large | 300 | 800 |
| XL | 700 | 1,000 |
For a more complete breakdown of your compute instance resources head over to the Compute Add-ons page.
We've been looking into improving the UX for the RLS policy UI after going through feedback of the community's struggles with RLS in general, and this is the next step that we're taking to streamline the UX.
With what we're calling a "hybrid" editor (for now), you'll be able to see the corresponding SQL query for creating or updating your RLS policies while you're editing the policy via the input fields. And if you'd like even greater control, there's always the "Open in SQL Editor" button as an escape hatch where you can edit the SQL query in its entirety.
Templates are now right beside the editor as well, so you no longer have to click back and forth between templates and the editor.
We've always seen the dashboard as more than just a database administration tool - it's also potentially an educational platform for developers to pick up SQL as they build out their database, and we hope that the changes here will make that even easier.
Connection pooler on port 6543 is set to transaction mode permanently#
Previously, the connection pooler's port 6543 could be set to either transaction or session mode under your project's database settings. This change makes it easier to distinguish between pooler modes and ports by only enabling transaction mode on port 6543 while maintaining session mode on port 5432.
If you're using port 6543 and your project's pooler mode is transaction, then you won't be able to set the mode to session. You can use port 5432 for session mode.
If you're using port 6543 and your project's pooler mode is session, then we strongly advise that you use port 5432 for session mode and change port 6543's mode to transaction. Once this setting is saved, you won't be able to set session mode on port 6543.
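To illustrate, here is roughly what the two pooler connection strings look like, with placeholders in place of your actual project ref, password, and region (copy the exact strings from your project's database settings):

```ts
// Transaction mode (port 6543): best for short-lived or serverless connections.
const transactionPooler =
  'postgresql://postgres.PROJECT_REF:PASSWORD@aws-0-REGION.pooler.supabase.com:6543/postgres';

// Session mode (port 5432): for long-lived connections that need session-level features.
const sessionPooler =
  'postgresql://postgres.PROJECT_REF:PASSWORD@aws-0-REGION.pooler.supabase.com:5432/postgres';
```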
In our previous platform architecture, our Storage, Realtime, and connection pooler (PgBouncer) services were bundled together, with a single instance of each service per project.
For our v2 architecture, we’ve “unbundled” these services, moving to a multi-tenant model, where a single instance of each service serves many projects:
This frees up as many resources as possible for your Postgres database, while enabling us to offer more resource-intensive features for these services, and opens the door to capabilities such as zero-downtime scaling.
With Supavisor replacing PgBouncer, along with some other key optimizations, the final pieces of our v2 architecture are now ready.
We’ve already fully rolled out our v2 architecture to paid plan projects. You now have more resources available, for the same price that you’ve been paying.
Free plan gradual rollout (20 March 2024 onwards)#
20 March 2024: Newly created or unpaused projects will use v2 architecture
28 March 2024: Existing projects will start being migrated to v2 architecture
This will be a gradual rollout - we will email you at least one week before your project is scheduled to be migrated.
Your action for projects scheduled to be migrated#
For newly created or unpaused projects on the Free Plan, no action is required.
For existing projects on the Free Plan, up to a few minutes of downtime is expected for the migration. For each of your projects, we’ll identify the 30-minute maintenance window where your project had the least database queries over the previous 10 weeks.
You have two choices:
Automatic Migration: If you don't take any action, we plan to do the migration automatically during that maintenance window with the least historical activity.
Manual Migration: Any time before that, you can go to Project Settings > General to see whether/when the maintenance window is scheduled (timings will also be included in the email). There, you may choose to manually restart the project yourself, at a time that is convenient for you. Your project will be restarted on v2 architecture.
Conversational AI assistant now available as part of the SQL Editor#
As part of our ongoing efforts to introduce the AI assistant across the dashboard, we're bringing the AI assistant to the SQL Editor next! Some of you might have already been using the AI assistant in the SQL Editor through the green bar at the top of the editor - we're sprucing it up by extending it further to a conversational UX. Go back and forth with the assistant and apply the code snippets that you deem to be the most appropriate!
This is currently under a feature preview - you may enable this feature by clicking on the user icon while in a project at the bottom of the side navigation bar and selecting "Feature previews". From there just enable the preview under "SQL Editor Conversational Assistant". And as always, we're incredibly open to any feedback for this, so give us a shout right here!
Matryoshka Embeddings: Faster OpenAI Vector Search Using Adaptive Retrieval#
Learn about how OpenAI’s newest text embeddings models, text-embedding-3-small and text-embedding-3-large, are able to truncate their dimensions with only a slight loss in accuracy.
Easily Connect to Supabase Projects From Frameworks and ORMs of Your Choice#
Connect to Supabase from any framework or ORM with our new “Connect” panel in Studio. This displays simple setup snippets that you can copy and paste into your application. We’ve started with a selection of popular frameworks and ORMs and you can request more by feature request or pull request.
PostgREST v12 has been released, and with it comes the highly requested aggregate functions avg(), count(), sum(), min(), and max(), which are used to summarize data by performing calculations across groups of rows.
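Through supabase-js, the new aggregates map onto the select string. A rough sketch, assuming a hypothetical orders table and that aggregate functions are enabled for your project's API:

```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// Sum `amount` per customer on a hypothetical `orders` table.
// Any non-aggregated column in the select acts like a GROUP BY.
const { data, error } = await supabase
  .from('orders')
  .select('customer_id, amount.sum()');

if (error) console.error(error);
else console.log(data); // one row per customer_id with the summed amount
```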
Terraform Provider to Manage Resources on Supabase Platform#
We’ve created an official Supabase Provider for Terraform to version-control your project settings in Git. You can use this provider in CI/CD pipelines to automatically provision projects and branches and keep configuration in code.
Support for Composite Foreign Keys in Table Editor#
We've shifted the management of foreign keys into the Table Editor’s side panel so you can easily see all foreign keys pertaining to a table, as well as the referencing columns of composite foreign keys.
Build a Content Recommendation App With Flutter and OpenAI#
Learn how we built a movie listing app with Supabase, Flutter, and OpenAI that recommends another movie based on the one a user is currently viewing.
Performance testing evaluates a system's compliance with its performance requirements. It reveals your app’s ability to handle user load, unexpected spikes, or recover from stressful workloads. In this blog post you will learn about how we automated our performance testing.
If you're not aware yet, we previously created a new RLS UI that comes integrated with the Supabase Assistant to (hopefully) help everyone write RLS policies easier and faster. This is currently still a feature preview which you can enable by clicking on your user profile at the bottom of the side navigation bar. We're continuously trying to see how we can improve this to make it a much better UX than the current existing RLS policy user flow.
The first gap that we're trying to address is the ease of referencing existing templates that just work out of the box from the current RLS policy flow - those proved to be really useful when trying to understand the syntax of writing policies, and so we added that in to the new RLS UI. Not just that but we also added more complex templates that work better in the new UI than the current one!
The next item that we're looking into is to see what minimal guard rails we can add to make writing RLS policies even less intimidating since the new UI expects only SQL input. One of the aims of the dashboard is to guide our users to not be afraid of SQL no matter the level of proficiency and we hope that we'll be able to cook up the ideal UX that will allow everyone to write SQL with confidence.
We received a lot of feedback that the icons alone in the navigation bar aren't intuitive enough to tell which page they navigate to. So finally, we're adding textual cues that show up on hover in the navigation bar, in hopes of making navigating around the dashboard easier!
For users who leverage SQL to analyze data, this should be useful for you! You can now plot your data points through the SQL editor after running your query. Choose which columns should be your axes and you're good to go. As always - feel free to drop any feedback for us on this! We're keen to see how else we can make this feature better and stronger 😄
Foreign Key Management re-introduced into the Column side panel editor#
We previously made an update in the Table Editor to shift the management of foreign keys into the table side panel in an effort to properly support composite foreign keys. This understandably caused the UX to suffer, as we received a lot of feedback that creating simple 1:1 foreign key relations had become much more troublesome. We've thus re-introduced being able to manage your foreign keys while editing a column! Thank you so much for everyone's feedback around this - it's something that we genuinely appreciate our community for! 🙏
Intellisense for the SQL editor was always enabled by default for everyone, but we're now making this a toggleable feature. This is specifically useful for large projects with many tables, as we've noticed the amount of data we try to load into intellisense can cause the SQL editor to slow down noticeably (likely due to browser memory issues).
Paid organizations can now launch projects on bigger compute immediately#
Paid plan users can now immediately launch projects on larger compute sizes. Previously, paid organizations had to launch projects on the default "Micro" instance and then separately upgrade their instance. You can always upgrade or downgrade your instance later on. Feel free to leave any feedback in our discussions here!
As mentioned in last week's changelog (and also as always 😉) we see everyone's feedback regarding the changes to the table editor search input and have enacted a slight change to make the search action more prominent and easier to click on! Again, thank you to everyone for sounding your thoughts, we genuinely appreciate them as it helps us guide the dashboard's DX to be optimal - keep em coming!
Separately - we're also aware of the feedback regarding our change in the way you manage your foreign keys, as announced in the changelog discussion 2 weeks ago - fret not! We're actively looking into that as well 🙂
For those who might have tables or SQL queries with long names, this should help alleviate some issues with the names truncating. Hopefully it'll be easier to find your tables / SQL queries! 😊
Connecting to your project - added Expo React Native guides#
Last week we announced a quicker way to get your project's connection parameters on the project's home page and we're heartened to already see some community contributions to add more content for different frameworks! Shoutout to @Hallidayo for the help on this - we're always keeping an eye out for more of such contributions 😄
Paid plan users can now immediately launch projects on larger compute sizes. Previously, paid organizations had to launch projects on the default "Micro" instance and then separately upgrade their instance. You can always upgrade or downgrade your instance later on.
A quicker way to get your project's connection parameters#
We've made your project's connection parameters more easily accessible by adding a "Connect" button to each project's homepage. This shows you quick instructions on how to either connect to your database directly or connect to your project via various app frameworks and ORMs. Hopefully this will help developers both new to and familiar with Supabase get to building quicker, without having to jump around the dashboard to find this information.
We're in the midst of revising the UX around the table editor to ensure that controls aren't sprawled across the page despite us building more and more features - and this is just the first step of more to come. Icons for tables and views have been tweaked to be more minimal, each table has an indicator to whether RLS has been enabled or not, and the search bar has been made a tad sleeker. As an assurance, we definitely hear everyone's feedback about the changes here in particular with the search bar being less visible and are actively looking to improve the experience here! 🙏
We've had some feedback from users that they'd want a convenient way to check on their project's users from the UI rather than having to go through the Table Editor or SQL Editor to query the auth.users table, and so we've gone ahead to ship this one.
We hear you! This has been a very popular request and we're happy to make the first step towards improving the UX around your SQL snippets. You can now delete your queries in bulk - gone are the days of rows full of Untitled queries 😄 Fret not, we're also aware that everyone is requesting better organization of snippets (specifically folders) - we're actively figuring out how best to bring that UX into the dashboard for everyone, so be sure to watch this space 😉
Unoptimized queries are a major cause of poor database performance - which is why the query performance report was initially built. We're equipping this page with better tooling, allowing users to:
Search by query or role
Sort results by latency
Expand results to view the full query that was run
As always, if there's anything more we can do for you, feel free to give us a shout in the feedback widget up top 🙂
In an effort to make our UI more consistent + coherent across all products, we've revamped the Logs Explorer to look just like the SQL Editor in hopes that there's less UI for users to learn, and users can just stay focused on doing what they want to do.
Support for composite foreign keys in table editor#
The previous UI for managing foreign keys in the table editor had functional limitations as it assumed single column relations for foreign keys (overly simplified). We've thus shifted the management of foreign keys into the table side panel editor instead. You can manage all foreign keys across all columns on the table in one place, rather than going into each column individually to do so.
Supavisor replaces PgBouncer for database connection pooling#
We’re deprecating PgBouncer and migrating all projects to our Supavisor connection pooler. Go grab the pooler connection string in your project’s Database Settings.
[ACTION REQUIRED] AWS now charges for IPv4 addresses, so we're migrating your project to IPv6. If your network supports IPv6 and/or you're using PostgREST, you don't need to make any changes. Otherwise, you need to update any connections to use Supavisor's connection pooler. We've also made IPv4 addresses available to purchase (passing on the cost from AWS).
We value transparency here at Supabase, and that includes ensuring our users have clear visibility over what they are paying for. We've added details in the organization billing breakdown section to show the "Current costs" on top of the "Projected costs" for the organization, along with tooltips explaining both. Compute credits are also shown here to ensure that they are considered in the cost calculation.
SQL Editor preview snippets by hovering over them in the navigation menu#
This is an effort to make it easier to find your queries. We're aware that users are facing difficulty managing their SQL queries, particularly when the number of queries grows really large - we're actively looking into how to make things better 🙏 Watch this space!
Table Editor edit text cells with larger real estate#
We've seen some users reaching out to us via feedback that they'd like a larger editor to edit their text-based column cells on the table editor - we hear you! And we've also sprinkled in some Markdown preview for those who might need it 🙂
Many users have been requesting this, and so has the team internally - you may now preview your HTML email templates right here in the dashboard. Gone are the days of having to manually check your templates elsewhere; hopefully this will make your development lives a little easier.
cron schema from the pg_cron extension no longer editable via the GUI#
The schema is nonetheless still editable from the SQL editor by writing queries directly. This is an effort to prevent directly inserting/updating rows in the pg_cron extension's cron.job table, as doing so bypasses security checks that would otherwise be asserted when jobs are scheduled/modified via the pg_cron functions. More information here.
We're streamlining the connection UI on the database settings page to be more concise and simpler, in hopes of improving the UX around connecting to your project's database and pushing the connection pooler as a recommended practice. As always, feel free to drop us feedback via the feedback widget in the navigation bar up top - we promise that we look at every single piece of feedback that comes in.
IPv4 add-on now available to allow direct connections to your database via an IPv4 address#
You may consider enabling this add-on via the dashboard if you're not planning on using our connection pooler (which still supports IPv4) and your environment does not support IPv6. More information regarding this add-on here.
Removed deprecated LinkedIn Provider from Auth Providers#
The LinkedIn provider was scheduled to be deprecated in favour of the current LinkedIn OIDC provider with a notice set up since November 2023 and a couple of email notifications sent out to our users. The LinkedIn provider is now removed from our Auth Providers page. Please do reach out to us via support if you might have been affected by this!
Supavisor, our connection pooler, does not support using Network Restrictions at the moment. Support for Network Restrictions will be enabled on the 24th of January 2024.
If you are not using Supavisor, this change does not affect you.
Starting the 24th, projects with existing network restrictions will have their Supavisor configuration automatically updated with the same restrictions. Changes to any project’s network restrictions will also be automatically propagated to Supavisor.
If direct connections to your database resolve to an IPv6 address, you need to add both IPv4 and IPv6 CIDRs to the list of allowed CIDRs. Network Restrictions will be applied to all database connection routes, whether pooled or direct, so you will need to add both the IPv4 and IPv6 networks you want to allow. There are two exceptions: if you have been granted an extension on the IPv6 migration OR if you have purchased the IPv4 add-on, you need only add IPv4 CIDRs.
Additionally, we're working on making this even better by offering dedicated IPv4 addresses which help with IP allowlisting and network restrictions. There will be another update once dedicated IPv4 addresses are ready to use.
Manage column-level privileges via the dashboard. This was a long time coming with many users looking forward to the addition of this to the dashboard. Huge shoutout once again to @HTMHell and everyone for their patience while we got this through the gates 🙏
Table editor support for copying cells via keyboard shortcut#
You can now copy and paste cell values in the Table Editor via Cmd+c and Cmd+v (or Ctrl+c and Ctrl+v)! Hopefully this makes managing your data a little easier.
Table editor prevent deleting all rows in a table through the GUI while impersonating a role#
We're disabling the "Delete rows" action in the Table Editor when you've selected all rows in a table while impersonating a role, due to an issue which @jacob-8 found (appreciate your report! 🙏). The issue mentioned can be found here.
A rundown of everything we shipped during Launch Week X
Day 1 - Supabase Studio: AI Assistant and User Impersonation#
Supabase Studio received a major update that reflects our commitment to a SQL-first approach and user-centric development. Awesome features like easy RLS policies with an AI assistant, Postgres Roles, User Impersonation, and much more.
Day 2 - Edge Functions: Node and native npm compatibility#
Edge Functions now natively supports npm modules and Node built-in APIs. You can directly import millions of popular, commonly used npm modules into your Edge Functions.
Day 4 - Supabase Auth: Identity Linking, Hooks, and HaveIBeenPwned integration#
We announced several new features for Supabase Auth: Identity Linking, Session Control, Leaked Password Protection, and Auth Hooks with Postgres functions.
This is a huge one for anyone wanting to serve data closer to the users or distribute loads across multiple databases. Learn how we implemented Read Replicas and how to use them in your projects.
Pools now start with only 10 connections and create new ones up to tenant default_pool_size or user pool_size
Logs correct tenant id
API endpoints accept PATCH requests
Previously, Supavisor would start a deterministic number of connections depending on the pool size specified on the tenant or tenant user.
Now Supavisor will start 10 connections and then allocate more as needed.
It was pretty easy to over-allocate your database max_connections without fully understanding what your max_connections were and how many were currently being used by other services.
This also makes migration from PgBouncer easier as it's much safer to run multiple connection poolers at the same time as you migrate, as long as they both don't need to allocate their full database connection pools.
Another implied behavior of Supavisor was that each connected user spins up its own pool. Without understanding this behavior, it's easy to over-allocate database connections by connecting different Postgres users to the pooler (see the illustration below).
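To put rough numbers on that (purely illustrative; the only figure taken from above is the new 10-connection starting point):

```ts
// Illustrative numbers only.
const poolSizePerUser = 15; // e.g. a tenant default_pool_size of 15
const distinctPoolerUsers = 4; // e.g. app, migrations, analytics, admin roles

// Previously: each user's pool was started at its full configured size.
const previouslyAllocated = poolSizePerUser * distinctPoolerUsers; // 60 connections

// Now: each pool starts at 10 connections and only grows as needed.
const initiallyAllocated = Math.min(10, poolSizePerUser) * distinctPoolerUsers; // 40 connections

console.log({ previouslyAllocated, initiallyAllocated });
```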
Need to store images, audio, and video clips? Well now you can do it on Supabase Storage. It's backed by S3 and our new OSS storage API written in Fastify and Typescript. Read the full blog post.
The Supabase API already handles Connection Pooling, but if you're connecting to your database directly (for example, with Prisma) we now bundle PgBouncer. Read the full blog post.
We open sourced our internal UI component library, so that anyone can use and contribute to the Supabase aesthetic. It lives at ui.supabase.io. It was also the #1 Product of the Day on Product Hunt.
Now you can run Supabase locally in the terminal with supabase start. We have done some preliminary work on diff-based schema migrations, and added some new tooling for self-hosting Supabase with Docker. Blog post here.
Thanks to a community contribution (@_mateomorris and @Beamanator), Supabase Auth now includes OAuth scopes. These allow you to request elevated access during login. For example, you may want to request access to a list of repositories when users log in with GitHub. Check out the Documentation.
New year, new features. We've been busy at Supabase during January and our community has been even busier. Here's a few things you'll find interesting.
Anyone who has worked with Firebase long enough has become frustrated by the lack of count functionality. This isn't a problem with PostgreSQL! Our libraries now have support for PostgREST's exact, planned, and estimated counts. A massive thanks to @dshukertjr for adding this support to our client library.
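With today's supabase-js client, requesting a count looks roughly like this (the messages table, project URL, and key are illustrative):

```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://your-project.supabase.co', 'your-anon-key');

// `count` can be 'exact', 'planned', or 'estimated';
// `head: true` skips the rows and returns only the count.
const { count, error } = await supabase
  .from('messages')
  .select('*', { count: 'exact', head: true });

if (error) console.error(error);
else console.log('row count:', count);
```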
We enabled 2 new Auth providers - Facebook and Azure. Thanks to @Levet for the Azure plugin, and once again to Netlify's amazing work with GoTrue to implement Facebook.
In case our Auth endpoints aren't easy enough already, we've built a React Auth Widget for you to drop into your app and to get up-and-running in minutes.
Performance: We migrated all of our subdomains to Route53, implementing custom Let's Encrypt certs for your APIs. As a result, our read benchmarks are measuring up to 12% faster.
Performance: We upgraded your databases to the new GP3 storage for faster and more consistent throughput.
Last month we announced an improved SQL Editor, and this month we've taken it even further. The SQL Editor is now a full Monaco editor, like you'd find in VS Code. Build your database directly from the browser.
We're now 8 months into building Supabase. We're focused on performance, stability, and reliability but that hasn't prevented us from shipping some great features.
In the lead-up to our Beta launch, we've released supabase-js version 1.0, and it comes with some major Developer Experience improvements. We received a lot of feedback from the community and we've incorporated it into our client libraries for our 1.0 release.
Although it was only intended to be a temporary feature, the SQL Editor has become one of the most useful features of Supabase. This month we decided to give it some attention, adding Tabs and making it full-screen. This is the first of many updates - we've got some exciting things planned for the SQL Editor.
For the heavy table editor users, we've gone ahead and added a bunch of key commands and keyboard shortcuts so you can zip around and manipulate your tables faster than ever.
One of the most requested Auth features was the ability to send magic links that your users can use to log in. You can use this with new or existing users, and alongside passwords or stand alone.
This month, we're ecstatic to announce a feature we think you'll love: Supabase Auth. It's too big to fit into a monthly update so look out for a full update in the next few days.
We want to make it easy to get started adding Auth to your app, so we've released a simple example and a video tutorial which shows you how to implement a basic auth system using PostgreSQL's Row Level Security.