
Conversation

@dbsid
Contributor

@dbsid dbsid commented Dec 29, 2025

First-time contributors' checklist

What is changed, added or deleted? (Required)

Which TiDB version(s) do your changes apply to? (Required)

Tips for choosing the affected version(s):

By default, CHOOSE MASTER ONLY so your changes will be applied to the next TiDB major or minor releases. If your PR involves a product feature behavior change or a compatibility change, CHOOSE THE AFFECTED RELEASE BRANCH(ES) AND MASTER.

For details, see tips for choosing the affected versions.

  • master (the latest development version)
  • v9.0 (TiDB 9.0 versions)
  • v8.5 (TiDB 8.5 versions)
  • v8.1 (TiDB 8.1 versions)
  • v7.5 (TiDB 7.5 versions)
  • v7.1 (TiDB 7.1 versions)
  • v6.5 (TiDB 6.5 versions)
  • v6.1 (TiDB 6.1 versions)
  • v5.4 (TiDB 5.4 versions)

What is the related PR or file link(s)?

  • This PR is translated from:
  • Other reference link(s):

Do your changes match any of the following descriptions?

  • Delete files
  • Change aliases
  • Need modification after applied to another branch
  • Might cause conflicts after applied to another branch

@ti-chi-bot ti-chi-bot bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 29, 2025
@dbsid dbsid marked this pull request as draft December 29, 2025 07:34
@ti-chi-bot ti-chi-bot bot added contribution This PR is from a community contributor. needs-ok-to-test Indicates a PR created by contributors and need ORG member send '/ok-to-test' to start testing. labels Dec 29, 2025
@ti-chi-bot

ti-chi-bot bot commented Dec 29, 2025

Hi @dbsid. Thanks for your PR.

I'm waiting for a pingcap member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot ti-chi-bot bot added the missing-translation-status This PR does not have translation status info. label Dec 29, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @dbsid, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces detailed documentation for the new TiDB X architecture, outlining its fundamental shift towards a cloud-native, 'Share-Everything' design. It addresses limitations of the classic TiDB architecture by implementing innovations such as object storage as the single source of truth, a 'Compute and Compute' separation for workload isolation, and an LSM Forest storage engine, all aimed at enhancing scalability, stability, cost-effectiveness, and performance predictability.

Highlights

  • New TiDB X Architecture Documentation: This pull request introduces comprehensive documentation for the new TiDB X architecture, detailing its fundamental shift from a 'Share-Nothing' to a 'Share-Everything' Service-Oriented Architecture (SOA) designed for the AI era and massive cloud scalability.
  • Object Storage Integration: TiDB X leverages object storage (e.g., Amazon S3) as the single source of truth for all data, enabling faster scaling, improved backup mechanisms, and instant node provisioning by decoupling data from local disks.
  • Compute-Compute Separation: The architecture introduces a novel 'Separation of Compute and Compute' design, isolating online transactional workloads (lightweight compute) from heavy maintenance tasks (heavy compute) to ensure predictable performance and optimized Total Cost of Ownership (TCO).
  • LSM Forest Storage Engine: TiDB X redesigns the storage engine from a single LSM-tree to an LSM Forest, assigning each Region its own independent LSM Tree. This eliminates compaction overhead and global mutex contention during cluster operations, improving stability and performance.
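The LSM Forest highlight above is the key storage-level change: each Region gets its own tree, its own memtable, and its own locks. The sketch below is a minimal, illustrative Go model of that idea only (TiKV's actual engine is written in Rust and far more involved); the point is simply that write and compaction state is scoped per Region, so there is no global mutex shared across Regions.

```go
// Illustrative model of an "LSM Forest": each Region owns an independent
// LSM-tree-like structure, so work on one Region never takes a lock shared
// by the others. Not TiKV's actual implementation.
package main

import (
	"fmt"
	"sync"
)

// regionTree stands in for one Region's private LSM tree: its own lock
// and its own memtable.
type regionTree struct {
	mu       sync.Mutex
	memtable map[string]string
}

func (t *regionTree) put(key, value string) {
	t.mu.Lock() // contention is scoped to this Region only
	defer t.mu.Unlock()
	t.memtable[key] = value
}

// lsmForest maps Region IDs to independent trees. There is no global
// mutex across Regions, unlike a single shared LSM-tree instance.
type lsmForest struct {
	mu    sync.Mutex // guards only the map of trees, not the trees' data paths
	trees map[uint64]*regionTree
}

func (f *lsmForest) region(id uint64) *regionTree {
	f.mu.Lock()
	defer f.mu.Unlock()
	if _, ok := f.trees[id]; !ok {
		f.trees[id] = &regionTree{memtable: map[string]string{}}
	}
	return f.trees[id]
}

func main() {
	forest := &lsmForest{trees: map[uint64]*regionTree{}}
	var wg sync.WaitGroup
	for id := uint64(1); id <= 3; id++ {
		wg.Add(1)
		go func(id uint64) { // writes to different Regions proceed independently
			defer wg.Done()
			forest.region(id).put(fmt.Sprintf("k-%d", id), "v")
		}(id)
	}
	wg.Wait()
	fmt.Println("regions written:", len(forest.trees))
}
```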



@ti-chi-bot ti-chi-bot bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Dec 29, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new documentation page for the TiDB X architecture. The content provides a good high-level overview of the new architecture, its motivations, and key innovations. My review focuses on improving clarity, correcting some typos and grammatical errors, and ensuring consistency in terminology, as per the repository's style guide. I've provided several suggestions to enhance readability and technical accuracy.

@dbsid dbsid changed the title from (WIP)Tidb x architecture to (WIP)TiDB x Architecture Dec 30, 2025
@siddontang
Member

Review Comments

Terminology Suggestion: Share-Everything → Shared-Storage

The term Share-Everything is imprecise and creates an awkward comparison (as @likidu noted, comparing a distributed database to an SOA is not an apples-to-apples comparison).

Recommendation: Consider using Shared-Storage instead:

  • Maintains linguistic symmetry with Share-Nothing
  • Accurately describes what is actually shared (the storage layer, not everything)
  • Industry-aligned terminology (similar to disaggregated storage used by Snowflake and modern data platforms)

Alternative framing for the intro:

TiDB X transitions from a Share-Nothing architecture with local storage to a disaggregated architecture with shared object storage as the persistent layer.


Structural Issues

  1. Title mismatch (lines 2 vs 7): Frontmatter says title: TiDB X Architecture but the first heading is # TiDB X Introduction. These should align.

  2. Heading levels: Since the frontmatter provides the page title, content headings should start at ## not #.

  3. Missing newline at EOF (line 109): Add a trailing newline.


Content Issues

  • Line 18: Incomplete sentence fragment - The Share-nothing architecture of TiDB Classic, effectively overcoming... needs restructuring
  • Line 38: Typo - architecute should be architecture
  • Line 43: Missing space - TiProxy(or load balancers) should be TiProxy (or load balancers)

PR Description

Please fill in the required What is changed, added or deleted? section before moving out of draft.


Overall the content is well-structured and provides good technical depth. The diagrams help illustrate the architectural differences clearly.

dbsid added 3 commits January 2, 2026 19:20
change share-everything to share-storage

change TiDB Classic to classic TiDB to align the terminology.
@lilin90 lilin90 added the translation/no-need No need to translate this PR. label Jan 4, 2026
@ti-chi-bot ti-chi-bot bot removed the missing-translation-status This PR does not have translation status info. label Jan 4, 2026
@lilin90 lilin90 added the area/tidb-cloud This PR relates to the area of TiDB Cloud. label Jan 4, 2026
@lilin90
Member

lilin90 commented Jan 4, 2026

/ok-to-test

@ti-chi-bot ti-chi-bot bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 12, 2026
@lilin90 lilin90 changed the base branch from release-8.5 to master January 12, 2026 03:43
@ti-chi-bot ti-chi-bot bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Jan 12, 2026
@lilin90 lilin90 changed the base branch from master to release-8.5 January 12, 2026 03:45
@ti-chi-bot ti-chi-bot bot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 12, 2026
@lilin90 lilin90 changed the base branch from release-8.5 to master January 12, 2026 03:46
@ti-chi-bot ti-chi-bot bot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Jan 12, 2026

The architecture diagram highlights a sophisticated separation of duties, ensuring that different types of work do not interfere with each other. The top "Isolated SQL Layer" consists of separate groups of compute nodes, which allows for multi-tenancy or workload isolation where different applications can have dedicated compute resources while sharing the same underlying data. Beneath this, the "Shared Services" layer breaks down heavy database tasks into independent microservices for operations like compaction, analyze, and DDL. By offloading expensive background operations—such as adding an index, Online DDL, or massive data imports—to the Shared Services layer, the system ensures these heavy jobs never compete for CPU or memory with the "Compute" nodes serving online user traffic. This guarantees predictable performance for critical applications and allows each component—Gateway, SQL Compute, Cache, and Background Services—to scale independently based on specific bottlenecks.
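As a rough illustration of the compute-and-compute separation described in this paragraph, the sketch below dispatches online queries and heavy background jobs to separate worker pools. It is a conceptual model only; the pool sizes, job names, and dispatch rule are hypothetical and do not reflect TiDB's actual scheduler.

```go
// Conceptual sketch of "separation of compute and compute": online query
// work and heavy background work drain separate worker pools, so background
// jobs never borrow capacity from the pool serving user traffic.
package main

import (
	"fmt"
	"sync"
)

type job struct {
	name       string
	background bool // compaction, DDL, import, analyze, and so on
}

// pool starts a fixed set of workers draining its own queue.
func pool(name string, workers int, queue <-chan job, wg *sync.WaitGroup) {
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range queue {
				fmt.Printf("[%s] handled %s\n", name, j.name)
			}
		}()
	}
}

func main() {
	online := make(chan job, 16)     // lightweight compute: user queries
	background := make(chan job, 16) // heavy compute: shared services

	var wg sync.WaitGroup
	pool("online", 4, online, &wg)
	pool("background", 2, background, &wg)

	for _, j := range []job{
		{name: "SELECT ... point lookup"},
		{name: "ADD INDEX backfill", background: true},
		{name: "IMPORT INTO batch", background: true},
		{name: "SELECT ... range scan"},
	} {
		if j.background {
			background <- j // never competes with the online pool
		} else {
			online <- j
		}
	}
	close(online)
	close(background)
	wg.Wait()
}
```

The design point is only that the background queue can back up or scale independently without ever taking workers away from the online pool.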

# Key innovations of TiDB X
Collaborator


There are some comparisons with classic TiDB in this section, such as “6TB+ data per TiKV node or 200k+ SST files,” which are duplicated in the section “The Architectural Ceiling: Challenges Hard to Overcome.” It would be better to remove the redundant descriptions.


## Rapid Elastic Scalability (5x-10x Faster)

In TiDB X, data resides in shared object storage with fully isolated LSM-trees for each Region. The system eliminates the need for physical data migration or compaction when adding or removing TiKV nodes. The result is a 5x–10x improvement in scaling speed compared to classic TiDB, maintaining stable latency for online traffic.
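A conceptual sketch of why this is faster, assuming the shared-storage model described above: "moving" a Region to a newly added node becomes a metadata update, because the Region's data stays at the same object-storage location. The node names, bucket paths, and rebalancing policy below are made up for illustration and are not the actual PD scheduling protocol.

```go
// Conceptual sketch: scaling out in a shared-storage design reassigns Region
// ownership metadata only; the SST data itself stays in the shared store.
package main

import "fmt"

type regionMeta struct {
	id    uint64
	owner string // which node currently serves this Region
	// In a shared-nothing design, changing owner would also require copying
	// the Region's SST files to the new node.
	objectStorePrefix string // data location in shared object storage (unchanged on move)
}

// scaleOut hands a share of Regions to a new node by updating metadata only.
func scaleOut(regions []regionMeta, newNode string) []regionMeta {
	for i := range regions {
		if i%3 == 0 { // rebalance roughly a third of the Regions (arbitrary policy)
			regions[i].owner = newNode
		}
	}
	return regions
}

func main() {
	regions := []regionMeta{
		{id: 1, owner: "tikv-1", objectStorePrefix: "s3://bucket/region/1"},
		{id: 2, owner: "tikv-1", objectStorePrefix: "s3://bucket/region/2"},
		{id: 3, owner: "tikv-2", objectStorePrefix: "s3://bucket/region/3"},
	}
	for _, r := range scaleOut(regions, "tikv-3") {
		fmt.Printf("region %d -> %s (data stays at %s)\n", r.id, r.owner, r.objectStorePrefix)
	}
}
```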
Collaborator


There are many key advantages. It is recommended to end with a summary section that includes a comparison between TiDB Classic and TiDB X in key areas, making it easier for users to remember.

Member

@lilin90 lilin90 left a comment


I'll continue my review soon.

@@ -0,0 +1,111 @@
---
title: TiDB X Architecture
summary: Learn how TiDB X's shared-storage architecture delivers cloud-native scalability and cost optimization
Member


Suggested change
summary: Learn how TiDB X's shared-storage architecture delivers cloud-native scalability and cost optimization
summary: Learn how the shared-storage, cloud-native TiDB X architecture delivers elastic scalability, predictable performance, and optimized total cost of ownership.


# TiDB X Architecture

TiDB X represents a fundamental architectural evolution from classic TiDB's Shared-Nothing design to a cloud-native Shared-Storage architecture. By leveraging object storage as the single source of truth, TiDB X introduces "Separation of Compute and Compute" design that isolates online transactional workloads from heavy background tasks. This architecture enables elastic scalability, predictable performance, and optimized Total Cost of Ownership (TCO) for AI-era workloads.
Member

@lilin90 lilin90 Jan 14, 2026


Reason:

  • Since this doc is about intro to TiDB X, the first intro should tell what it is clearly to help users quickly get it if they don't want to read details.
  • The phrase "single source of truth" is marketing-oriented, not recommended in official docs.
  • Clarify what is being separated (compute workloads), not just repeating the slogan.
  • Use lowercase “total cost of ownership (TCO)” on first occurrence (common style-guide practice).

Definition reference:

Suggested change
TiDB X represents a fundamental architectural evolution from classic TiDB's Shared-Nothing design to a cloud-native Shared-Storage architecture. By leveraging object storage as the single source of truth, TiDB X introduces "Separation of Compute and Compute" design that isolates online transactional workloads from heavy background tasks. This architecture enables elastic scalability, predictable performance, and optimized Total Cost of Ownership (TCO) for AI-era workloads.
TiDB X is a new distributed SQL architecture that makes cloud-native object storage the backbone of TiDB. This architecture enables elastic scalability, predictable performance, and optimized total cost of ownership (TCO) for AI-era workloads.
TiDB X represents a fundamental evolution from [classic TiDB](/tidb-architecture.md)'s shared-nothing architecture to a cloud-native shared-storage architecture. By leveraging object storage as the shared persistent storage layer, TiDB X introduces a separation of compute workloads that isolates online transactional processing from resource-intensive background tasks.


TiDB X represents a fundamental architectural evolution from classic TiDB's Shared-Nothing design to a cloud-native Shared-Storage architecture. By leveraging object storage as the single source of truth, TiDB X introduces "Separation of Compute and Compute" design that isolates online transactional workloads from heavy background tasks. This architecture enables elastic scalability, predictable performance, and optimized Total Cost of Ownership (TCO) for AI-era workloads.

This document details the challenges of the classic TiDB architecture, the architecture of TiDB X, and its key innovations.
Member

@lilin90 lilin90 Jan 16, 2026


Suggested change
This document details the challenges of the classic TiDB architecture, the architecture of TiDB X, and its key innovations.
This document introduces the TiDB X architecture, explains the motivation behind TiDB X, and describes the key innovations compared with the classic TiDB architecture.

@lilin90 lilin90 requested a review from qiancai January 16, 2026 10:13

### Strengths of classic TiDB

The "Shared-Nothing" architecture of classic TiDB effectively overcame the limitations of traditional monolithic databases. By decoupling compute from storage and utilizing the Raft consensus algorithm, it delivered a level of resilience and scale that defined the modern NewSQL era.
Member


  • Use capitalized letters only when necessary.
  • Use present tense in most cases. Since classic TiDB still exists and serves customers, using the present tense is ok.
Suggested change
The "Shared-Nothing" architecture of classic TiDB effectively overcame the limitations of traditional monolithic databases. By decoupling compute from storage and utilizing the Raft consensus algorithm, it delivered a level of resilience and scale that defined the modern NewSQL era.
The shared-nothing architecture of classic TiDB addresses the limitations of traditional monolithic databases. By decoupling compute from storage and utilizing the Raft consensus algorithm, it provides the resilience and scalability required for distributed SQL workloads.


The "Shared-Nothing" architecture of classic TiDB effectively overcame the limitations of traditional monolithic databases. By decoupling compute from storage and utilizing the Raft consensus algorithm, it delivered a level of resilience and scale that defined the modern NewSQL era.

Its success was built on several foundational strengths:
Member


Suggested change
Its success was built on several foundational strengths:
The classic TiDB architecture is built on several foundational capabilities:


Its success was built on several foundational strengths:

- Massive Horizontal Scalability: Classic TiDB allowes businesses to scale both read and write performance linearly with their workload, reaching millions of QPS while supporting massive clusters with over 1 PiB of data and tens of millions of tables.
Member

@lilin90 lilin90 Jan 19, 2026


  • Limit the use of marketing tone.
  • Make sentences easy to read.
Suggested change
- Massive Horizontal Scalability: Classic TiDB allowes businesses to scale both read and write performance linearly with their workload, reaching millions of QPS while supporting massive clusters with over 1 PiB of data and tens of millions of tables.
- Horizontal scalability: It supports linear scaling for both read and write performance. Clusters can scale to handle millions of queries per second (QPS) and manage over 1 PiB of data across tens of millions of tables.

Its success was built on several foundational strengths:

- Massive Horizontal Scalability: Classic TiDB allowes businesses to scale both read and write performance linearly with their workload, reaching millions of QPS while supporting massive clusters with over 1 PiB of data and tens of millions of tables.
- True HTAP Capabilities: It unified transactional and analytical processing. By pushing down heavy aggregation and join operations to TiFlash (the columnar engine), it provided predictable, real-time analytics on fresh transactional data without complex ETL pipelines.
Member

@lilin90 lilin90 Jan 19, 2026


Suggested change
- True HTAP Capabilities: It unified transactional and analytical processing. By pushing down heavy aggregation and join operations to TiFlash (the columnar engine), it provided predictable, real-time analytics on fresh transactional data without complex ETL pipelines.
- Hybrid Transactional and Analytical Processing (HTAP): It unifies transactional and analytical workloads. By pushing down heavy aggregation and join operations to TiFlash (the columnar storage engine), it provides predictable, real-time analytics on fresh transactional data without complex ETL pipelines.
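For readers who want to see the pushdown behavior from a client, here is a small Go sketch related to the HTAP description in the suggestion above. Assumptions: a reachable TiDB endpoint and a hypothetical `orders` table that already has a TiFlash replica; the DSN is a placeholder. It uses the `tidb_isolation_read_engines` session variable, which restricts the storage engines the optimizer may read from.

```go
// Steer an analytical query to TiFlash from a Go client. The DSN and table
// name are placeholders; replace them with your own environment.
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL-protocol driver; TiDB speaks the MySQL protocol
)

func main() {
	ctx := context.Background()

	// Placeholder DSN: replace with your TiDB endpoint and credentials.
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pin one connection so the session variable applies to the query below.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Restrict the optimizer to TiFlash so the aggregation is pushed down
	// to the columnar engine instead of being scanned from TiKV.
	if _, err := conn.ExecContext(ctx, "SET SESSION tidb_isolation_read_engines = 'tiflash'"); err != nil {
		log.Fatal(err)
	}

	var total int64
	// `orders` is a hypothetical table that already has a TiFlash replica.
	if err := conn.QueryRowContext(ctx, "SELECT COUNT(*) FROM orders").Scan(&total); err != nil {
		log.Fatal(err)
	}
	fmt.Println("rows counted via TiFlash:", total)
}
```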


- Massive Horizontal Scalability: Classic TiDB allowes businesses to scale both read and write performance linearly with their workload, reaching millions of QPS while supporting massive clusters with over 1 PiB of data and tens of millions of tables.
- True HTAP Capabilities: It unified transactional and analytical processing. By pushing down heavy aggregation and join operations to TiFlash (the columnar engine), it provided predictable, real-time analytics on fresh transactional data without complex ETL pipelines.
- Non-Blocking Operations: Its implementation of Fully Online DDL meant that schema changes were non-blocking for reads and writes, allowing businesses to evolve their data models with minimal impact on latency or uptime.
Member


Suggested change
- Non-Blocking Operations: Its implementation of Fully Online DDL meant that schema changes were non-blocking for reads and writes, allowing businesses to evolve their data models with minimal impact on latency or uptime.
- Non-blocking schema changes: It utilizes a fully online DDL implementation. Schema changes do not block reads or writes, allowing data models to evolve with minimal impact on application latency or availability.

Comment on lines +25 to +26
- Always-Online Availability: The architecture supported seamless cluster upgrades and scaling operations (up/down), ensuring critical services remained online during maintenance.
- Freedom from Lock-in: As an open-source solution supporting AWS, GCP, and Azure, it offered true cloud neutrality, preventing vendor lock-in.
Member


Suggested change
- Always-Online Availability: The architecture supported seamless cluster upgrades and scaling operations (up/down), ensuring critical services remained online during maintenance.
- Freedom from Lock-in: As an open-source solution supporting AWS, GCP, and Azure, it offered true cloud neutrality, preventing vendor lock-in.
- High availability: It supports seamless cluster upgrades and scaling operations. This ensures that critical services remain accessible during maintenance or resource adjustment.
- Multi-cloud support: It operates as an open-source solution with support for Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. This provides cloud neutrality without vendor lock-in.


### Challenges of classic TiDB

Despite these massive achievements, the "Shared-Nothing" architecture of classic TiDB, where storage and compute are tightly coupled on local nodes—eventually hit physical limitations in extreme large-scale environments. As data volumes exploded and cloud-native expectations evolved, inherent structural challenges emerged that were difficult to resolve without a fundamental redesign.
Member


Suggested change
Despite these massive achievements, the "Shared-Nothing" architecture of classic TiDB, where storage and compute are tightly coupled on local nodes—eventually hit physical limitations in extreme large-scale environments. As data volumes exploded and cloud-native expectations evolved, inherent structural challenges emerged that were difficult to resolve without a fundamental redesign.
While the shared-nothing architecture of classic TiDB provides high resilience, the tight coupling of storage and compute on local nodes introduces limitations in extreme large-scale environments. As data volumes grow and cloud-native requirements evolve, several structural challenges emerge.

Comment on lines +32 to +34
- Scalability limitations: In classic TiDB, scaling out (adding nodes) or scaling in (removing nodes) requires physically copying massive amounts of data (SST files) between nodes. This process is time-consuming for large datasets and can impact online traffic due to the heavy CPU and I/O required to move data.

The underlying storage engine (RocksDB) in classic TiDB uses a single LSM-tree protected by a global mutex. This creates a scalability ceiling where the system struggles to handle large datasets (e.g., 6TB+ data or 200k+ SST files per TiKV node), preventing it from utilizing the full capacity of the hardware.
Member


Suggested change
- Scalability limitations: In classic TiDB, scaling out (adding nodes) or scaling in (removing nodes) requires physically copying massive amounts of data (SST files) between nodes. This process is time-consuming for large datasets and can impact online traffic due to the heavy CPU and I/O required to move data.
The underlying storage engine (RocksDB) in classic TiDB uses a single LSM-tree protected by a global mutex. This creates a scalability ceiling where the system struggles to handle large datasets (e.g., 6TB+ data or 200k+ SST files per TiKV node), preventing it from utilizing the full capacity of the hardware.
- **Scalability limitations**
- Data movement overhead: In classic TiDB, scaling out (adding nodes) or scaling in (removing nodes) operations require physical movement of SST files between nodes. For large datasets, this process is time-consuming and can degrade online traffic performance due to heavy CPU and I/O consumption during data movement.
- Storage engine bottleneck: The underlying RocksDB storage engine in classic TiDB uses a single LSM-tree protected by a global mutex. This design creates a scalability ceiling where the system struggles to handle large datasets (for example, over 6 TiB of data or 200,000 SST files per TiKV node), preventing the system from fully utilizing the hardware capacity.

Comment on lines +36 to +38
- Stability and performance challenges: Heavy write traffic triggers massive local compaction jobs to merge SST files. In the Classic architecture, these compaction jobs run on the same TiKV nodes serving online traffic, consuming significant CPU and I/O resources and can impact the online traffic.

There is no physical isolation between logical regions and physical SST files. Operations like adding an index or moving a region (balancing) create compaction overhead that competes directly with user queries, leading to performance jitter. Under heavy write pressure, if the background compaction cannot keep up with the foreground write traffic, the system can trigger flow control mechanisms to protect the storage engine, which results in write throughput throttling and latency spikes for the application.
Member


Suggested change
- Stability and performance challenges: Heavy write traffic triggers massive local compaction jobs to merge SST files. In the Classic architecture, these compaction jobs run on the same TiKV nodes serving online traffic, consuming significant CPU and I/O resources and can impact the online traffic.
There is no physical isolation between logical regions and physical SST files. Operations like adding an index or moving a region (balancing) create compaction overhead that competes directly with user queries, leading to performance jitter. Under heavy write pressure, if the background compaction cannot keep up with the foreground write traffic, the system can trigger flow control mechanisms to protect the storage engine, which results in write throughput throttling and latency spikes for the application.
- **Stability and performance interference**
- Resource contention: Heavy write traffic triggers massive local compaction jobs to merge SST files. In classic TiDB, because these compaction jobs run on the same TiKV nodes serving online traffic, they compete for the same CPU and I/O resources, which might affect the online application.
- Lack of physical isolation: There is no physical isolation between logical Regions and physical SST files. Operations like adding an index or moving a region (balancing) create compaction overhead that competes directly with user queries, leading to potential performance jitter.
- Write throttling: Under heavy write pressure, if the background compaction cannot keep up with the foreground write traffic, the classic TiDB triggers flow control mechanisms to protect the storage engine. This results in write throughput throttling and latency spikes for the application.
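To make the write-throttling point in the suggestion above concrete, here is a deliberately simplified Go model of flow control: once the compaction backlog crosses a soft threshold, foreground writes are delayed, and past a hard threshold they are effectively stalled. The thresholds and delays are arbitrary, and this is not RocksDB's actual write-stall policy.

```go
// Simplified flow-control model: when the compaction backlog grows faster
// than background workers can drain it, foreground writes are delayed, which
// the application observes as throttled throughput and latency spikes.
package main

import (
	"fmt"
	"time"
)

type engine struct {
	pendingCompactionBytes int64
	slowdownThreshold      int64 // start delaying writes above this backlog
	stopThreshold          int64 // block writes (almost) entirely above this backlog
}

// writeDelay returns how long a foreground write should be stalled.
func (e *engine) writeDelay() time.Duration {
	switch {
	case e.pendingCompactionBytes >= e.stopThreshold:
		return 100 * time.Millisecond // effectively a write stall
	case e.pendingCompactionBytes >= e.slowdownThreshold:
		return 10 * time.Millisecond // soft throttling
	default:
		return 0
	}
}

func main() {
	e := &engine{slowdownThreshold: 64 << 30, stopThreshold: 256 << 30} // arbitrary thresholds
	for _, backlog := range []int64{1 << 30, 100 << 30, 300 << 30} {
		e.pendingCompactionBytes = backlog
		fmt.Printf("backlog %d GiB -> stall %v per write\n", backlog>>30, e.writeDelay())
	}
}
```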


Labels

  • area/tidb-cloud: This PR relates to the area of TiDB Cloud.
  • contribution: This PR is from a community contributor.
  • needs-cherry-pick-release-8.5: Should cherry pick this PR to release-8.5 branch.
  • ok-to-test: Indicates a PR is ready to be tested.
  • size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.
  • translation/no-need: No need to translate this PR.
