High Performance MySQL 1
References:
- Blog post: https://segmentfault.com/a/1190000040374142
- Book: 📚 'High Performance MySQL'
- I note down the hard parts, translating them into Chinese for better comprehension.
- Sometimes, when I have trouble reading the English version, I also write the Chinese down from memory, which helps me stay absorbed in the passage.
Foreword
I have been a fan of this book for years, and the third edition makes a great book even better. Not only do world-class experts share their expertise here, but they have also taken the time to update the book and add high-quality new chapters. While the book covers many details of getting high performance from MySQL, more importantly it focuses on the process of improvement rather than on simple facts and trivia. This book will help you learn how to improve performance, no matter how MySQL's behavior changes over time.
The authors are uniquely qualified to write this book, thanks to their experience, rigorous approach, focus on efficiency, and commitment to improvement. By experience, I mean that the authors have worked on MySQL performance from the early days, when it didn't scale and had no instrumentation, through to today, when it is vastly improved. By rigorous approach, I mean that they treat performance optimization as a science: they first identify the problems that need to be solved, and then use reasoning and measurement to solve them.
I am most impressed by their focus on efficiency. As consultants, they don't have the luxury of time. Clients who pay by the hour want problems solved quickly. So the authors have defined processes and built tools to get the work done efficiently and correctly. They describe those processes in this book and publish the source code for their tools.
Finally, they are committed to continually improving their own abilities. That includes shifting their focus from throughput to response time, working to understand MySQL's performance on new hardware, and learning new skills, such as queueing theory, to better understand performance.
I believe this book heralds a bright future for MySQL. As MySQL has grown to support more demanding workloads, the authors have advanced the community's understanding of MySQL performance. They have also contributed directly to improving MySQL through XtraDB and XtraBackup. I am always learning from them, and I hope you take the time to do so as well.
—Mark Callaghan, Software Engineer, Facebook
Preface
We wrote this book to serve not only MySQL application developers but also MySQL database administrators. We assume that you already have some experience with MySQL, as well as some basic knowledge of system administration, networking, and Unix-like operating systems.
The second edition gave readers a great deal of information, but no book can cover a topic completely. Between the second and third editions, we recorded thousands of interesting problems that we had solved ourselves or seen others solve. When we started outlining the third edition, we realized that covering these topics fully would require three to five thousand pages, and even then the book still wouldn't be comprehensive. On reflection, we recognized that the second edition's emphasis on exhaustive coverage was actually self-limiting, because that approach often failed to teach readers how to think about MySQL.
As a result, this third edition has a different focus from the second. We still convey a great deal of information, and we still emphasize goals such as reliability and correctness. But we have also tried to give the book a deeper purpose: we want to teach the principles of why MySQL works the way it does, not just the facts of how it works. We have included more illustrative stories and case studies that show these principles in action. With these, we try to answer questions such as: "Given MySQL's internal architecture and operation, what practical effects arise in real use? Why do those effects matter? How do they make MySQL well suited (or not suited) for particular needs?"
Ultimately, we hope that your knowledge of MySQL's internals will help you in situations beyond the scope of this book. And we hope that this new insight will help you learn and practice a methodology for designing, maintaining, and troubleshooting systems built on MySQL.
How This Book Is Organized
We fit a lot of complicated topics into this book. Here, we explain how we put them together in an order that makes them easier to learn.
A Broad Overview
Chapter 1, MySQL Architecture and History is dedicated to the basics—things you’ll need to be familiar with before you dig in deeply. You need to understand how MySQL is organized before you’ll be able to use it effectively. This chapter explains MySQL’s architecture and key facts about its storage engines. It helps you get up to speed if you aren’t familiar with some of the fundamentals of a relational database, including transactions. This chapter will also be useful if this book is your introduction to MySQL but you’re already familiar with another database, such as Oracle. We also include a bit of historical context: the changes to MySQL over time, recent ownership changes, and where we think it’s headed.
Building a Solid Foundation
The early chapters cover material we hope you’ll reference over and over as you use MySQL.
Chapter 2, Benchmarking MySQL discusses the basics of benchmarking—that is, determining what sort of workload your server can handle, how fast it can perform certain tasks, and so on. Benchmarking is an essential skill for evaluating how the server behaves under load, but it’s also important to know when it’s not useful.
Chapter 3, Profiling Server Performance introduces you to the response time–oriented approach we take to troubleshooting and diagnosing server performance problems. This framework has proven essential to solving some of the most puzzling cases we’ve seen. Although you might choose to modify our approach (we developed it by modifying Cary Millsap’s approach, after all), we hope you’ll avoid the pitfalls of not having any method at all.
In Chapters 4 through 6, we introduce three topics that together form the foundation for a good logical and physical database design. In Chapter 4, Optimizing Schema and Data Types, we cover the various nuances of data types and table design. Chapter 5, Indexing for High Performance extends the discussion to indexes—that is, physical database design. A firm understanding of indexes and how to use them well is essential for using MySQL effectively, so you’ll probably find yourself returning to this chapter repeatedly. And Chapter 6, Query Performance Optimization wraps the topics together by explaining how MySQL executes queries and how you can take advantage of its query optimizer’s strengths. This chapter also presents specific examples of many common classes of queries, illustrating where MySQL does a good job and how to transform queries into forms that use its strengths.
Up to this point, we’ve covered the basic topics that apply to any database: tables, indexes, data, and queries. Chapter 7, Advanced MySQL Features goes beyond the basics and shows you how MySQL’s advanced features work. We examine topics such as partitioning, stored procedures, triggers, and character sets. MySQL’s implementation of these features is different from other databases, and a good understanding of them can open up new opportunities for performance gains that you might not have thought about otherwise.
Configuring Your Application
The next two chapters discuss how to make MySQL, your application, and your hardware work well together. In Chapter 8, Optimizing Server Settings, we discuss how you can configure MySQL to make the most of your hardware and to be reliable and robust.
Chapter 9, Operating System and Hardware Optimization explains how to get the most out of your operating system and hardware. We discuss solid-state storage in depth, and we suggest hardware configurations that might provide better performance for larger-scale applications.
Both chapters explore MySQL internals to some degree. This is a recurring theme that continues all the way through the appendixes: learn how it works internally, and you’ll be empowered to understand and reason about the consequences.
MySQL as an Infrastructure Component
MySQL doesn’t exist in a vacuum. It’s part of an overall application stack, and you’ll need to build a robust overall architecture for your application. The next set of chapters is about how to do that.
In Chapter 10, Replication, we discuss MySQL’s killer feature: the ability to set up multiple servers that all stay in sync with a master server’s changes. Unfortunately, replication is perhaps MySQL’s most troublesome feature for some people. This doesn’t have to be the case, and we show you how to ensure that it keeps running well.
Chapter 11, Scaling MySQL discusses what scalability is (it’s not the same thing as performance), why applications and systems don’t scale, and what to do about it. If you do it right, you can scale MySQL to suit nearly any purpose. Chapter 12, High Availability delves into a related-but-distinct topic: how to ensure that MySQL stays up and functions smoothly. In Chapter 13, MySQL in the Cloud, you’ll learn about what’s different when you run MySQL in cloud computing environments.
In Chapter 14, Application-Level Optimization, we explain what we call full-stack optimization—optimization from the frontend to the backend, all the way from the user’s experience to the database.
The best-designed, most scalable architecture in the world is no good if it can’t survive power outages, malicious attacks, application bugs or programmer mistakes, and other disasters. That’s why Chapter 15, Backup and Recovery discusses various backup and recovery strategies for your MySQL databases. These strategies will help minimize your downtime in the event of inevitable hardware failure and ensure that your data survives such catastrophes.
Miscellaneous Useful Topics
In the last chapter and the book’s appendixes, we delve into several topics that either don’t fit well into any of the earlier chapters, or are referenced often enough in multiple chapters that they deserve a bit of special attention.
Chapter 16, Tools for MySQL Users explores some of the open source and commercial tools that can help you manage and monitor your MySQL servers more efficiently.
Appendix A introduces the three major unofficial versions of MySQL that have arisen over the last few years, including the one that our company maintains. It’s worth knowing what else is available; many problems that are difficult or intractable with MySQL are solved elegantly by one of the variants. Two of the three (Percona Server and MariaDB) are drop-in replacements, so the effort involved in trying them out is not large. However, we hasten to add that we think most users are well served by sticking with the official MySQL distribution from Oracle.
Appendix B shows you how to inspect your MySQL server. Knowing how to get status information from the server is important; knowing what that information means is even more important. We cover SHOW INNODB STATUS in particular detail, because it provides deep insight into the operations of the InnoDB transactional storage engine. There is a lot of discussion of InnoDB’s internals in this appendix.
Appendix C shows you how to copy very large files from place to place efficiently—a must if you are going to manage large volumes of data. Appendix D shows you how to really use and understand the all-important EXPLAIN command. Appendix E shows you how to decipher what’s going on when queries are requesting locks that interfere with each other. And finally, Appendix F is an introduction to Sphinx, a high-performance, full-text indexing system that can complement MySQL’s own abilities.
Software Versions and Availability
MySQL is a moving target. In the years since Jeremy wrote the outline for the first edition of this book, numerous releases of MySQL have appeared. MySQL 4.1 and 5.0 were available only as alpha versions when the first edition went to press, but today MySQL 5.1 and 5.5 are the backbone of many large online applications. As we completed this third edition, MySQL 5.6 was the unreleased bleeding edge.
We didn’t rely on a single version of MySQL for this book. Instead, we drew on our extensive collective knowledge of MySQL in the real world. The core of the book is focused on MySQL 5.1 and MySQL 5.5, because those are what we consider the “current” versions. Most of our examples assume you’re running some reasonably mature version of MySQL 5.1, such as MySQL 5.1.50 or newer. We have made an effort to note features or functionalities that might not exist in older releases or that might exist only in the upcoming 5.6 series. However, the definitive reference for mapping features to specific versions is the MySQL documentation itself. We expect that you’ll find yourself visiting the annotated online documentation (http://dev.mysql.com/doc/) from time to time as you read this book.
Another great aspect of MySQL is that it runs on all of today’s popular platforms: Mac OS X, Windows, GNU/Linux, Solaris, FreeBSD, you name it! However, we are biased toward GNU/Linux and other Unix-like operating systems. Windows users are likely to encounter some differences. For example, file paths are completely different on Windows. We also refer to standard Unix command-line utilities; we assume you know the corresponding commands in Windows. Perl is the other rough spot when dealing with MySQL on Windows. MySQL comes with several useful utilities that are written in Perl, and certain chapters in this book present example Perl scripts that form the basis of more complex tools you’ll build. Percona Toolkit—which is indispensable for administering MySQL—is also written in Perl. However, Perl isn’t included with Windows. In order to use these scripts, you’ll need to download a Windows version of Perl from ActiveState and install the necessary add-on modules (DBI and DBD::mysql) for MySQL access.
Using Code Examples
This book is here to help you get your job done. In general, you may use the code in this book in your programs and documentation. You don’t need to contact us for permission unless you’re reproducing a significant portion of the code. For example, writing a program that uses several chunks of code from this book doesn’t require permission. Selling or distributing a CD-ROM of examples from O’Reilly books does require permission. Answering a question by citing this book and quoting example code doesn’t require permission. Incorporating a significant amount of example code from this book into your product’s documentation does require permission.
Examples are maintained on the site http://www.highperfmysql.com and will be updated there from time to time. We cannot commit, however, to updating and testing the code for every minor release of MySQL.
We appreciate, but don’t require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: “High Performance MySQL, Third Edition, by Baron Schwartz et al. (O’Reilly). Copyright 2012 Baron Schwartz, Peter Zaitsev, and Vadim Tkachenko, 978-1-449-31428-6.”
If you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at permissions@oreilly.com.
Books Online
Safari Books Online (www.safaribooksonline.com) is an on-demand digital library that delivers expert content in both book and video form from the world’s leading authors in technology and business. Technology professionals, software developers, web designers, and business and creative professionals use Safari Books Online as their primary resource for research, problem solving, learning, and certification training. Safari Books Online offers a range of product mixes and pricing programs for organizations, government agencies, and individuals. Subscribers have access to thousands of books, training videos, and prepublication manuscripts in one fully searchable database from publishers like O’Reilly Media, Prentice Hall Professional, Addison-Wesley Professional, Microsoft Press, Sams, Que, Peachpit Press, Focal Press, Cisco Press, John Wiley & Sons, Syngress, Morgan Kaufmann, IBM Redbooks, Packt, Adobe Press, FT Press, Apress, Manning, New Riders, McGraw-Hill, Jones & Bartlett, Course Technology, and dozens more. For more information about Safari Books Online, please visit us online.
CHAPTER 1
MySQL Architecture and History
MySQL is very different from other database servers, and its architectural characteristics make it useful for a wide range of purposes as well as making it a poor choice for others. MySQL is not perfect, but it is flexible enough to work well in very demanding environments, such as web applications. At the same time, MySQL can power embedded applications, data warehouses, content indexing and delivery software, highly available redundant systems, online transaction processing (OLTP), and much more. To get the most from MySQL, you need to understand its design so that you can work with it, not against it. MySQL is flexible in many ways. For example, you can configure it to run well on a wide range of hardware, and it supports a variety of data types. However, MySQL’s most unusual and important feature is its storage-engine architecture, whose design separates query processing and other server tasks from data storage and retrieval. This separation of concerns lets you choose how your data is stored and what performance, features, and other characteristics you want.
This chapter provides a high-level overview of the MySQL server architecture, the major differences between the storage engines, and why those differences are important. We’ll finish with some historical context and benchmarks. We’ve tried to explain MySQL by simplifying the details and showing examples. This discussion will be useful for those new to database servers as well as readers who are experts with other database servers.
MySQL’s Logical Architecture
A good mental picture of how MySQL’s components work together will help you understand the server. Figure 1-1 shows a logical view of MySQL’s architecture.
The topmost layer contains the services that aren’t unique to MySQL. They’re services most network-based client/server tools or servers need: connection handling, authentication, security, and so forth.
The second layer is where things get interesting. Much of MySQL’s brains are here, including the code for query parsing, analysis, optimization, caching, and all the built-in functions (e.g., dates, times, math, and encryption). Any functionality provided across storage engines lives at this level: stored procedures, triggers, and views, for example.
The third layer contains the storage engines. They are responsible for storing and retrieving all data stored “in” MySQL. Like the various filesystems available for GNU/Linux, each storage engine has its own benefits and drawbacks. The server communicates with them through the storage engine API. This interface hides differences between storage engines and makes them largely transparent at the query layer. The API contains a couple of dozen low-level functions that perform operations such as “begin a transaction” or “fetch the row that has this primary key.” The storage engines don’t parse SQL¹ or communicate with each other; they simply respond to requests from the server.
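A quick way to see this pluggable design from the client side is to ask the server which engines it offers and to choose one per table. A minimal sketch (the table name here is just an example):

```sql
-- List the storage engines this server build supports,
-- and which one is the default
SHOW ENGINES;

-- Each table chooses its own engine at creation time
CREATE TABLE example_tbl (
    id INT NOT NULL PRIMARY KEY
) ENGINE = InnoDB;

-- Inspect which engine an existing table uses
SHOW TABLE STATUS LIKE 'example_tbl'\G
```

Because the engine is a per-table property, a single database can mix engines, which is exactly what the storage engine API makes possible.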
Connection Management and Security
Each client connection gets its own thread within the server process. The connection’s queries execute within that single thread, which in turn resides on one core or CPU. The server caches threads, so they don’t need to be created and destroyed for each new connection².
When clients (applications) connect to the MySQL server, the server needs to authenticate them. Authentication is based on username, originating host, and password. X.509 certificates can also be used across an SSL (Secure Sockets Layer) connection. Once a client has connected, the server verifies whether the client has privileges for each query it issues (e.g., whether the client is allowed to issue a SELECT statement that accesses the Country table in the world database).
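As a sketch, the username/host/password model looks like this in SQL; the account name, host pattern, and password here are made-up examples:

```sql
-- An account is identified by user AND originating host
CREATE USER 'appuser'@'192.168.1.%' IDENTIFIED BY 'secret';

-- Privileges are checked for each statement the client issues;
-- this account may only SELECT from world.Country
GRANT SELECT ON world.Country TO 'appuser'@'192.168.1.%';

-- Optionally require an SSL connection with a valid X.509 certificate
GRANT USAGE ON *.* TO 'appuser'@'192.168.1.%' REQUIRE X509;
```

With this grant, an UPDATE against world.Country from this account would be rejected at the privilege check, even though the connection itself succeeded.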
¹ One exception is InnoDB, which does parse foreign key definitions, because the MySQL server doesn’t yet implement them itself.
² MySQL 5.5 and newer versions support an API that can accept thread-pooling plugins, so a small pool of threads can service many connections.
Optimization and Execution
MySQL parses queries to create an internal structure (the parse tree), and then applies a variety of optimizations. These can include rewriting the query, determining the order in which it will read tables, choosing which indexes to use, and so on. You can pass hints to the optimizer through special keywords in the query, affecting its decision making process. You can also ask the server to explain various aspects of optimization.
This lets you know what decisions the server is making and gives you a reference point for reworking queries, schemas, and settings to make everything run as efficiently as possible. We discuss the optimizer in much more detail in Chapter 6.
The optimizer does not really care what storage engine a particular table uses, but the storage engine does affect how the server optimizes the query. The optimizer asks the storage engine about some of its capabilities and the cost of certain operations, and for statistics on the table data. For instance, some storage engines support index types that can be helpful to certain queries. You can read more about indexing and schema optimization in Chapter 4 and Chapter 5.
Before even parsing the query, though, the server consults the query cache, which can store only SELECT statements, along with their result sets. If anyone issues a query that’s identical to one already in the cache, the server doesn’t need to parse, optimize, or execute the query at all—it can simply pass back the stored result set. We write more about that in Chapter 7.
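You can watch these optimizer decisions with EXPLAIN and influence them with hints. A small sketch using tables from the world sample database; the index name is hypothetical:

```sql
-- Show the chosen plan: which index, estimated rows, join order
EXPLAIN SELECT * FROM Country WHERE Continent = 'Asia';

-- Hint the optimizer toward a particular index (name is made up)
SELECT Name FROM Country USE INDEX (idx_continent)
WHERE Continent = 'Asia';

-- Force the join to proceed in the order the tables are written
SELECT STRAIGHT_JOIN City.Name, Country.Name
FROM City JOIN Country ON City.CountryCode = Country.Code;
```

Hints like these override the optimizer's judgment, so they are best applied only after EXPLAIN shows the default plan is genuinely worse.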
Concurrency Control
Anytime more than one query needs to change data at the same time, the problem of concurrency control arises. For our purposes in this chapter, MySQL has to do this at two levels: the server level and the storage engine level. Concurrency control is a big topic to which a large body of theoretical literature is devoted, so we will just give you a simplified overview of how MySQL deals with concurrent readers and writers, so you have the context you need for the rest of this chapter.
We’ll use an email box on a Unix system as an example. The classic mbox file format is very simple. All the messages in an mbox mailbox are concatenated together, one after another. This makes it very easy to read and parse mail messages. It also makes mail delivery easy: just append a new message to the end of the file. But what happens when two processes try to deliver messages at the same time to the same mailbox? Clearly that could corrupt the mailbox, leaving two interleaved messages at the end of the mailbox file. Well-behaved mail delivery systems use locking to prevent corruption. If a client attempts a second delivery while the mailbox is locked, it must wait to acquire the lock itself before delivering its message. This scheme works reasonably well in practice, but it gives no support for concurrency. Because only a single process can change the mailbox at any given time, this approach becomes problematic with a high-volume mailbox.
Whenever more than one request needs to change data at the same time, concurrency problems arise. MySQL has to solve this at two levels: the server and the storage engine… (I understood the rest of this passage; I'm getting into the flow of reading now.)
Read/Write Locks
Reading from the mailbox isn’t as troublesome. There’s nothing wrong with multiple clients reading the same mailbox simultaneously; because they aren’t making changes, nothing is likely to go wrong. But what happens if someone tries to delete message number 25 while programs are reading the mailbox? It depends, but a reader could come away with a corrupted or inconsistent view of the mailbox. So, to be safe, even reading from a mailbox requires special care. If you think of the mailbox as a database table and each mail message as a row, it’s easy to see that the problem is the same in this context. In many ways, a mailbox is really just a simple database table. Modifying rows in a database table is very similar to removing or changing the content of messages in a mailbox file.
The solution to this classic problem of concurrency control is rather simple. Systems that deal with concurrent read/write access typically implement a locking system that consists of two lock types. These locks are usually known as shared locks and exclusive locks, or read locks and write locks.
- Two kinds of locks: shared locks and exclusive locks (i.e., read locks and write locks)
Without worrying about the actual locking technology, we can describe the concept as follows. Read locks on a resource are shared, or mutually nonblocking: many clients can read from a resource at the same time and not interfere with each other. Write locks, on the other hand, are exclusive—i.e., they block both read locks and other write locks—because the only safe policy is to have a single client writing to the resource at a given time and to prevent all reads when a client is writing. In the database world, locking happens all the time: MySQL has to prevent one client from reading a piece of data while another is changing it. It performs this lock management internally in a way that is transparent much of the time.
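Explicit table locks make the shared/exclusive behavior easy to observe. A sketch with a hypothetical mailbox table:

```sql
-- Session 1: a read (shared) lock
LOCK TABLES mailbox READ;
SELECT COUNT(*) FROM mailbox;   -- reading is allowed
UNLOCK TABLES;

-- Session 2, at the same time: another read lock is granted
-- immediately, because read locks are mutually nonblocking
LOCK TABLES mailbox READ;
UNLOCK TABLES;

-- Session 3: a write (exclusive) lock; this statement blocks
-- until every read lock above has been released
LOCK TABLES mailbox WRITE;
DELETE FROM mailbox WHERE id = 25;
UNLOCK TABLES;
```

The same shared-versus-exclusive semantics apply to the locks MySQL takes internally; LOCK TABLES just makes them visible.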
Lock Granularity
One way to improve the concurrency of a shared resource is to be more selective about what you lock. Rather than locking the entire resource, lock only the part that contains the data you need to change. Better yet, lock only the exact piece of data you plan to change. Minimizing the amount of data that you lock at any one time lets changes to a given resource occur simultaneously, as long as they don’t conflict with each other. The problem is locks consume resources. Every lock operation—getting a lock, checking to see whether a lock is free, releasing a lock, and so on—has overhead. If the system spends too much time managing locks instead of storing and retrieving data, performance can suffer.
A locking strategy is a compromise between lock overhead and data safety, and that compromise affects performance. Most commercial database servers don’t give you much choice: you get what is known as row-level locking in your tables, with a variety of often complex ways to give good performance with many locks.
MySQL, on the other hand, does offer choices. Its storage engines can implement their own locking policies and lock granularities. Lock management is a very important decision in storage engine design; fixing the granularity at a certain level can give better performance for certain uses, yet make that engine less suited for other purposes. Because MySQL offers multiple storage engines, it doesn’t require a single general purpose solution. Let’s have a look at the two most important lock strategies.
Table locks
The most basic locking strategy available in MySQL, and the one with the lowest overhead, is table locks. A table lock is analogous to the mailbox locks described earlier: it locks the entire table. When a client wishes to write to a table (insert, delete, update, etc.), it acquires a write lock. This keeps all other read and write operations at bay. When nobody is writing, readers can obtain read locks, which don’t conflict with other read locks.
Table locks have variations for good performance in specific situations. For example, READ LOCAL table locks allow some types of concurrent write operations. Write locks also have a higher priority than read locks, so a request for a write lock will advance to the front of the lock queue even if readers are already in the queue (write locks can advance past read locks in the queue, but read locks cannot advance past write locks). Although storage engines can manage their own locks, MySQL itself also uses a variety of locks that are effectively table-level for various purposes. For instance, the server uses a table-level lock for statements such as ALTER TABLE, regardless of the storage engine.
Row locks
The locking style that offers the greatest concurrency (and carries the greatest overhead) is the use of row locks. Row-level locking, as this strategy is commonly known, is available in the InnoDB and XtraDB storage engines, among others. Row locks are implemented in the storage engine, not the server (refer back to the logical architecture diagram if you need to). The server is completely unaware of locks implemented in the storage engines, and as you’ll see later in this chapter and throughout the book, the storage engines all implement locking in their own ways.
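With InnoDB, row-level granularity is visible inside a transaction. A sketch using a hypothetical accounts table:

```sql
-- Transaction 1: lock only the row for customer 1023
START TRANSACTION;
SELECT balance FROM accounts WHERE customer_id = 1023 FOR UPDATE;
UPDATE accounts SET balance = balance - 200.00 WHERE customer_id = 1023;

-- Meanwhile, another transaction can freely update a DIFFERENT row:
--   UPDATE accounts SET balance = balance + 50.00 WHERE customer_id = 2047;
-- but an update to customer 1023's row would block until we finish.

COMMIT;   -- releases the row locks
```

Under a table-locking engine, the second transaction's update would have blocked regardless of which row it touched; that difference is the whole point of row locks.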
Transactions
You can’t examine the more advanced features of a database system for very long before transactions enter the mix. A transaction is a group of SQL queries that are treated atomically, as a single unit of work. If the database engine can apply the entire group of queries to a database, it does so, but if any of them can’t be done because of a crash or other reason, none of them is applied. It’s all or nothing. Little of this section is specific to MySQL. If you’re already familiar with ACID transactions, feel free to skip ahead to “Transactions in MySQL” on page 10.
A banking application is the classic example of why transactions are necessary. Imagine a bank’s database with two tables: checking and savings. To move $200 from Jane’s checking account to her savings account, you need to perform at least three steps:
- Make sure her checking account balance is greater than $200.
- Subtract $200 from her checking account balance.
- Add $200 to her savings account balance.
The entire operation should be wrapped in a transaction so that if any one of the steps fails, any completed steps can be rolled back.
You start a transaction with the START TRANSACTION statement and then either make its changes permanent with COMMIT or discard the changes with ROLLBACK. So, the SQL for
our sample transaction might look like this:
```sql
1 START TRANSACTION;
2 SELECT balance FROM checking WHERE customer_id = 10233276;
3 UPDATE checking SET balance = balance - 200.00 WHERE customer_id = 10233276;
4 UPDATE savings  SET balance = balance + 200.00 WHERE customer_id = 10233276;
5 COMMIT;
```
But transactions alone aren’t the whole story. What happens if the database server crashes while performing line 4? Who knows? The customer probably just lost $200.
And what if another process comes along between lines 3 and 4 and removes the entire checking account balance? The bank has given the customer a $200 credit without even knowing it.
Transactions aren’t enough unless the system passes the ACID test. ACID stands for Atomicity, Consistency, Isolation, and Durability. These are tightly related criteria that a well-behaved transaction processing system must meet:
Atomicity
A transaction must function as a single indivisible unit of work so that the entire transaction is either applied or rolled back. When transactions are atomic, there is no such thing as a partially completed transaction: it’s all or nothing.
Consistency
The database should always move from one consistent state to the next. In our example, consistency ensures that a crash between lines 3 and 4 doesn’t result in $200 disappearing from the checking account. Because the transaction is never committed, none of the transaction’s changes are ever reflected in the database.
Isolation
The results of a transaction are usually invisible to other transactions until the transaction is complete. This ensures that if a bank account summary runs after line 3 but before line 4 in our example, it will still see the $200 in the checking account. When we discuss isolation levels, you’ll understand why we said usually invisible.
Durability
Once committed, a transaction’s changes are permanent. This means the changes must be recorded such that data won’t be lost in a system crash. Durability is a slightly fuzzy concept, however, because there are actually many levels. Some durability strategies provide a stronger safety guarantee than others, and nothing is ever 100% durable (if the database itself were truly durable, then how could backups increase durability?). We discuss what durability really means in MySQL in later chapters.
ACID transactions ensure that banks don’t lose your money. It is generally extremely difficult or impossible to do this with application logic. An ACID-compliant database server has to do all sorts of complicated things you might not realize to provide ACID guarantees.
Just as with increased lock granularity, the downside of this extra security is that the database server has to do more work. A database server with ACID transactions also generally requires more CPU power, memory, and disk space than one without them.
As we’ve said several times, this is where MySQL’s storage engine architecture works to your advantage. You can decide whether your application needs transactions. If you don’t really need them, you might be able to get higher performance with a nontransactional storage engine for some kinds of queries. You might be able to use LOCK TABLES to give the level of protection you need without transactions. It’s all up to you.
Isolation Levels
Isolation is more complex than it looks. The SQL standard defines four isolation levels, with specific rules for which changes are and aren’t visible inside and outside a transaction. Lower isolation levels typically allow higher concurrency and have lower overhead.
Each storage engine implements isolation levels slightly differently, and they don’t necessarily match what you might expect if you’re used to another database product (thus, we won’t go into exhaustive detail in this section). You should read the manuals for whichever storage engines you decide to use.
Let’s take a quick look at the four isolation levels:
READ UNCOMMITTED
In the READ UNCOMMITTED isolation level, transactions can view the results of uncommitted transactions. At this level, many problems can occur unless you really, really know what you are doing and have a good reason for doing it. This level is rarely used in practice, because its performance isn’t much better than the other levels, which have many advantages. Reading uncommitted data is also known as a dirty read.
READ COMMITTED
The default isolation level for most database systems (but not MySQL!) is READ COMMITTED. It satisfies the simple definition of isolation used earlier: a transaction will see only those changes made by transactions that were already committed when it began, and its changes won’t be visible to others until it has committed. This level still allows what’s known as a nonrepeatable read. This means you can run the same statement twice and see different data.
REPEATABLE READ
REPEATABLE READ solves the problems that READ UNCOMMITTED allows. It guarantees that any rows a transaction reads will “look the same” in subsequent reads within the same transaction, but in theory it still allows another tricky problem: phantom reads. Simply put, a phantom read can happen when you select some range of rows, another transaction inserts a new row into the range, and then you select the same range again; you will then see the new “phantom” row. InnoDB and XtraDB solve the phantom read problem with multiversion concurrency control, which we explain later in this chapter. REPEATABLE READ is MySQL’s default transaction isolation level.
SERIALIZABLE
The highest level of isolation, SERIALIZABLE, solves the phantom read problem by forcing transactions to be ordered so that they can’t possibly conflict. In a nutshell, SERIALIZABLE places a lock on every row it reads. At this level, a lot of timeouts and lock contention can occur. We’ve rarely seen people use this isolation level, but your application’s needs might force you to accept the decreased concurrency in favor of the data stability that results.
Table 1-1 summarizes the various isolation levels and the drawbacks associated with each one.
| Isolation level | Dirty reads possible | Nonrepeatable reads possible | Phantom reads possible | Locking reads |
|---|---|---|---|---|
| READ UNCOMMITTED | Yes | Yes | Yes | No |
| READ COMMITTED | No | Yes | Yes | No |
| REPEATABLE READ | No | No | Yes | No |
| SERIALIZABLE | No | No | No | Yes |
Deadlocks
A deadlock is when two or more transactions are mutually holding and requesting locks on the same resources, creating a cycle of dependencies. Deadlocks occur when transactions try to lock resources in a different order. They can happen whenever multiple transactions lock the same resources. For example, consider these two transactions running against the StockPrice table:
Transaction #1

```sql
START TRANSACTION;
UPDATE StockPrice SET close = 45.50 WHERE stock_id = 4 AND date = '2002-05-01';
UPDATE StockPrice SET close = 19.80 WHERE stock_id = 3 AND date = '2002-05-02';
COMMIT;
```

Transaction #2

```sql
START TRANSACTION;
UPDATE StockPrice SET high = 20.12 WHERE stock_id = 3 AND date = '2002-05-02';
UPDATE StockPrice SET high = 47.20 WHERE stock_id = 4 AND date = '2002-05-01';
COMMIT;
```
If you’re unlucky, each transaction will execute its first query and update a row of data, locking it in the process. Each transaction will then attempt to update its second row, only to find that it is already locked. The two transactions will wait forever for each other to complete, unless something intervenes to break the deadlock.

To combat this problem, database systems implement various forms of deadlock detection and timeouts. The more sophisticated systems, such as the InnoDB storage engine, will notice circular dependencies and return an error instantly. This can be a good thing—otherwise, deadlocks would manifest themselves as very slow queries. Others will give up after the query exceeds a lock wait timeout, which is not always good. The way InnoDB currently handles deadlocks is to roll back the transaction that has the fewest exclusive row locks (an approximate metric for which will be the easiest to roll back). Lock behavior and order are storage engine–specific, so some storage engines might deadlock on a certain sequence of statements even though others won’t. Deadlocks have a dual nature: some are unavoidable because of true data conflicts, and some are caused by how a storage engine works.
Deadlocks cannot be broken without rolling back one of the transactions, either partially or wholly. They are a fact of life in transactional systems, and your applications should be designed to handle them. Many applications can simply retry their transactions from the beginning.
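The "retry from the beginning" advice above can be sketched as a small wrapper. `DeadlockError` and `flaky_transfer` are invented stand-ins for your driver’s deadlock exception and your transactional unit of work:

```python
# Sketch of the common retry-on-deadlock pattern: rerun the whole
# transaction a bounded number of times when it loses a deadlock.
import time

class DeadlockError(Exception):
    """Stand-in for a driver's 'deadlock found, transaction rolled back' error."""

def run_with_retry(txn_fn, max_attempts=3, backoff=0.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except DeadlockError:
            if attempt == max_attempts:
                raise                     # give up after the last attempt
            time.sleep(backoff * attempt) # brief backoff before retrying

attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError("chosen as deadlock victim")  # simulate losing twice
    return "committed"

result = run_with_retry(flaky_transfer)
print(result)  # "committed" on the third attempt
```

Because the victim transaction is rolled back wholly, rerunning it from the start is safe; the wrapper simply automates that.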
Transaction Logging
Transaction logging helps make transactions more efficient. Instead of updating the tables on disk each time a change occurs, the storage engine can change its in-memory copy of the data. This is very fast. The storage engine can then write a record of the change to the transaction log, which is on disk and therefore durable. This is also a relatively fast operation, because appending log events involves sequential I/O in one small area of the disk instead of random I/O in many places. Then, at some later time, a process can update the table on disk. Thus, most storage engines that use this technique (known as write-ahead logging) end up writing the changes to disk twice. If there’s a crash after the update is written to the transaction log but before the changes are made to the data itself, the storage engine can still recover the changes upon restart.
The recovery method varies between storage engines.
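The write-ahead-logging idea above can be sketched in a few lines. This is a toy illustration only, not InnoDB’s redo log format; the file names and `Table` class are invented:

```python
# Toy write-ahead logging: append each change to a sequential log first,
# update the in-memory copy, and replay the log after a "crash".
import json, os, tempfile

class Table:
    def __init__(self, directory):
        self.log_path = os.path.join(directory, "redo.log")
        self.data_path = os.path.join(directory, "table.json")
        self.rows = {}

    def put(self, key, value):
        with open(self.log_path, "a") as log:     # 1. durable sequential append
            log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.rows[key] = value                    # 2. cheap in-memory update

    def checkpoint(self):
        with open(self.data_path, "w") as f:      # 3. the "second write" to disk
            json.dump(self.rows, f)

    def recover(self):
        if os.path.exists(self.data_path):        # start from the last checkpoint
            with open(self.data_path) as f:
                self.rows = json.load(f)
        with open(self.log_path) as log:          # replay logged changes over it
            for line in log:
                rec = json.loads(line)
                self.rows[rec["key"]] = rec["value"]

d = tempfile.mkdtemp()
t = Table(d)
t.put("a", 1)            # logged, but never checkpointed to the data file
crashed = Table(d)       # simulate a restart with an empty in-memory cache
crashed.recover()
print(crashed.rows["a"]) # 1: the change survives because the log is durable
```

This is why the technique writes each change to disk twice (log, then table), yet still wins: the log write is a fast sequential append, and the table update can happen lazily.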
Transactions in MySQL
MySQL provides two transactional storage engines: InnoDB and NDB Cluster. Several third-party engines are also available; the best-known engines right now are XtraDB and PBXT. We discuss some specific properties of each engine in the next section.
AUTOCOMMIT
MySQL operates in AUTOCOMMIT mode by default. This means that unless you’ve explicitly begun a transaction, it automatically executes each query in a separate transaction. You can enable or disable AUTOCOMMIT for the current connection by setting a variable:
```
mysql> SHOW VARIABLES LIKE 'AUTOCOMMIT';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| autocommit    | ON    |
+---------------+-------+
1 row in set (0.00 sec)

mysql> SET AUTOCOMMIT = 1;
```
The values 1 and ON are equivalent, as are 0 and OFF. When you run with AUTOCOMMIT = 0, you are always in a transaction, until you issue a COMMIT or ROLLBACK. MySQL then starts a new transaction immediately. Changing the value of AUTOCOMMIT has no effect on nontransactional tables, such as MyISAM or Memory tables, which have no notion of committing or rolling back changes.
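The effect of holding a transaction open until COMMIT can be seen from a second session. The sketch below uses SQLite (Python stdlib) as a stand-in, since its Python driver wraps DML in an implicit transaction much like running MySQL with AUTOCOMMIT = 0; the file path is invented:

```python
# Two sessions against one database: the writer's uncommitted change is
# invisible to the reader until the writer issues COMMIT.
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)  # DML opens an implicit transaction
reader = sqlite3.connect(path)  # separate session

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # transaction open, not committed
before_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()                             # like issuing COMMIT in MySQL
after_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(before_commit, after_commit)  # 0 1
```

In MySQL you would get the same visibility behavior with `SET AUTOCOMMIT = 0` in the writer’s session, with the usual caveat from the text: none of this applies to nontransactional tables.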
Mixing storage engines in transactions
MySQL doesn’t manage transactions at the server level. Instead, the underlying storage engines implement transactions themselves. This means you can’t reliably mix different engines in a single transaction.
If you mix transactional and nontransactional tables (for instance, InnoDB and MyISAM tables) in a transaction, the transaction will work properly if all goes well. However, if a rollback is required, the changes to the nontransactional table can’t be undone. This leaves the database in an inconsistent state from which it might be difficult to recover and renders the entire point of transactions moot. This is why it is really important to pick the right storage engine for each table. MySQL will not usually warn you or raise errors if you do transactional operations on a nontransactional table. Sometimes rolling back a transaction will generate the warning “Some nontransactional changed tables couldn’t be rolled back,” but most of the time, you’ll have no indication you’re working with nontransactional tables.
Implicit and explicit locking
InnoDB uses a two-phase locking protocol. It can acquire locks at any time during a transaction, but it does not release them until a COMMIT or ROLLBACK. It releases all the locks at the same time. The locking mechanisms described earlier are all implicit. InnoDB handles locks automatically, according to your isolation level. However, InnoDB also supports explicit locking, which the SQL standard does not mention at all:
• SELECT … LOCK IN SHARE MODE
• SELECT … FOR UPDATE
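The two-phase discipline described above can be sketched as a toy lock manager. This is not InnoDB’s implementation; the `Transaction` class and its methods are invented for illustration:

```python
# Toy two-phase locking: locks may be acquired at any point while a
# transaction runs (growing phase), but are only released all together
# at COMMIT/ROLLBACK (shrinking phase).
class Transaction:
    def __init__(self, lock_table):
        self._lock_table = lock_table  # shared dict: row_id -> owning txn
        self._held = set()

    def lock_row(self, row_id):
        owner = self._lock_table.get(row_id)
        if owner is not None and owner is not self:
            raise RuntimeError(f"row {row_id} is locked by another transaction")
        self._lock_table[row_id] = self
        self._held.add(row_id)

    def commit(self):
        for row_id in self._held:      # release everything at once
            del self._lock_table[row_id]
        self._held.clear()

locks = {}
t1, t2 = Transaction(locks), Transaction(locks)
t1.lock_row("r1")
# t2.lock_row("r1") here would raise: r1 is held until t1 commits
t1.commit()
t2.lock_row("r1")  # fine once t1 has released all of its locks
```

Holding every lock until the end is what makes the deadlocks discussed earlier possible, and it is also why a long-running transaction can block many others.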
MySQL also supports the LOCK TABLES and UNLOCK TABLES commands, which are implemented in the server, not in the storage engines. These have their uses, but they are not a substitute for transactions. If you need transactions, use a transactional storage engine. We often see applications that have been converted from MyISAM to InnoDB but are still using LOCK TABLES. This is no longer necessary because of row-level locking, and it can cause severe performance problems.
The interaction between LOCK TABLES and transactions is complex, and there are unexpected behaviors in some server versions. Therefore, we recommend that you never use LOCK TABLES unless you are in a transaction and AUTOCOMMIT is disabled, no matter what storage engine you are using.