
[Bernstein09] 9.4. Single-Master Primary-Copy Replication



Normal Operation

The most straightforward, and often pragmatic, approach to replication is to designate one replica as the primary copy and to allow update transactions to read and write only that replica. This is the primary-backup technique illustrated in Figure 9.1. Updates on the primary are distributed to other replicas, called secondaries, in the order in which they executed at the primary and are applied to secondaries in that order (see Figure 9.7). Thus, all replicas process the same stream of updates in the same order. In between any two update transactions, a replica can process a local query.
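
As a minimal illustration of this flow, here is a Python sketch in which in-memory queues stand in for the network; the Primary and Secondary classes and their methods are invented for this example, not taken from any product.

```python
from collections import deque

class Secondary:
    """A replica that applies the primary's updates in arrival order."""
    def __init__(self):
        self.state = {}        # key -> value
        self.inbox = deque()   # updates received from the primary, in order

    def apply_pending(self):
        # Apply updates strictly in the order the primary sent them.
        while self.inbox:
            key, value = self.inbox.popleft()
            self.state[key] = value

    def query(self, key):
        # Local queries run in between update transactions.
        return self.state.get(key)

class Primary:
    """The only replica that accepts update transactions."""
    def __init__(self, secondaries):
        self.state = {}
        self.secondaries = secondaries

    def run_update_transaction(self, writes):
        # Apply the transaction locally, then propagate its writes to every
        # secondary in the order in which they executed here.
        for key, value in writes:
            self.state[key] = value
        for s in self.secondaries:
            s.inbox.extend(writes)

# One primary, two secondaries receiving the same ordered stream of updates.
s1, s2 = Secondary(), Secondary()
p = Primary([s1, s2])
p.run_update_transaction([("x", 1), ("y", 2)])
s1.apply_pending()
print(s1.query("x"))   # -> 1
```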

Figure 9.7. Propagating Updates from Primary to Secondaries. Transactions update data only at the primary replica. The primary propagates updates to the secondary replicas. The secondaries can process local queries.


One way to propagate updates is by synchronous replication. For example, in a relational database system, one could define an SQL trigger on the primary table that remotely updates secondary copies of the table within the context of the user’s update transaction. This implies that updates are propagated right away, which may delay the completion of the transaction. It also means that administrators cannot control when updates are applied to replicas. For example, in some decision support systems, it is desirable to apply updates at fixed times, so the database remains unchanged when certain analysis work is in progress.

Currently, the more popular approach is asynchronous replication, where updates to the primary generate a stream of updates to the secondaries, which is processed after transactions on the primary commit. For database systems, the stream of updates is often a log. The log reflects the exact order of the updates that were performed at the primary, so the updates can be applied directly to each secondary as they arrive.

One application of primary-copy replication is database mirroring, where there are only two replicas, one primary and one secondary. This is a hot backup technique for high availability. Among the first general-purpose systems to do this were IBM’s IMS/XRF and Tandem’s (now HP’s) Non-Stop SQL database systems. Now most database systems offer a database mirroring feature.

With database mirroring, the primary sends its database log to the secondary. The secondary continually runs log-based recovery, so that its database state is very close to that of the primary. If the primary fails, the secondary just needs to finish processing the tail of the log it received before the primary failed, after which it can take over as primary. If synchronous replication is used, then no transactions are lost in this failover. With asynchronous replication, the secondary may not have received the last few updates from the primary. This problem can be mitigated if the primary and secondary are colocated and the secondary can be given access to the primary copy’s disk log. In that case, after the secondary is given control, it can read the disk log to pick up the last few log records that it did not receive before the primary failed.
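
A rough sketch of the mirroring loop and failover step just described, with an in-memory list standing in for the log; the class and method names are hypothetical, not any product's API.

```python
class MirrorSecondary:
    """Hot-standby replica that continually replays the primary's log."""
    def __init__(self):
        self.state = {}
        self.log_tail = []          # log records received but not yet applied

    def receive_log_record(self, record):
        self.log_tail.append(record)
        self.replay()               # keep the standby's state close to the primary's

    def replay(self):
        while self.log_tail:
            key, value = self.log_tail.pop(0)
            self.state[key] = value

    def take_over(self, disk_log_tail=()):
        # On failover: finish the tail of the log already received and, if the
        # standby is colocated and can read the failed primary's disk log, pick
        # up any records that never arrived over the network.
        self.log_tail.extend(disk_log_tail)
        self.replay()
        # From here on, this replica acts as the primary.
```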

Another application of primary-copy replication is to produce queryable copies of parts of a database. This is a functionally rich feature that is offered by most relational database products. In this case, there is no real-time requirement to move the log records immediately to the secondary copy, so there is time to postprocess updates from the primary copy in various ways.

Some relational database systems capture updates to each primary table in a log table that is colocated with the primary table (see Figure 9.8b). One approach is to have the system define an SQL trigger on each primary table that translates each update into an insert on the log table. Periodically, the primary creates a new log table to capture updates and sends the previous log table to each secondary where it is applied to the replica. This approach to capturing updates can slow down normal processing of transactions, due to the extra work introduced by the trigger.

Figure 9.8. Generating Update Streams for Replicas. An update stream for replicas can be produced from (a) the database system’s log or (b) a log table produced by triggers.


Another approach is to postprocess the database log to create the stream of updates to the replicas. If this is done on-line while the log is still in main memory, then it avoids slowing down normal processing of transactions as compared to the trigger approach. In fact, the log can be sent to a different server, where the postprocessing is done.

Since the log can be quite large, one reason to postprocess the log is to reduce its size if possible. One technique is to filter out aborted transactions, since they do not need to be applied to replicas (see Figure 9.8a). This reduces the amount of data transmission and the cost of processing updates at the replica. However, it requires that the primary not send a log record until it knows that the transaction that wrote the record has committed. This introduces additional processing time at the primary and delay in updating the secondary, which are the main costs of reducing the data transmission. Another technique is to send only the finest granularity data that has changed, e.g., fields of records, rather than coarser-grain units of data, such as entire records.
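
The filtering step can be sketched as a small log postprocessor that buffers each transaction's records and forwards them only once its commit record is seen. The (txn_id, kind, payload) record format below is an assumption made for the example, not any system's actual log layout.

```python
def filter_committed(log_records):
    """Yield only the updates of committed transactions, in commit order.

    Each log record is a (txn_id, kind, payload) tuple, where kind is
    'update', 'commit', or 'abort'. Records are buffered per transaction and
    shipped only once that transaction's commit record is seen.
    """
    pending = {}                          # txn_id -> buffered updates
    for txn_id, kind, payload in log_records:
        if kind == "update":
            pending.setdefault(txn_id, []).append(payload)
        elif kind == "commit":
            for update in pending.pop(txn_id, []):
                yield txn_id, update      # safe to send: the transaction committed
        elif kind == "abort":
            pending.pop(txn_id, None)     # never send aborted work to replicas

# T2 aborts, so only T1's update reaches the secondaries.
log = [("T1", "update", ("x", 1)), ("T2", "update", ("y", 2)),
       ("T2", "abort", None), ("T1", "commit", None)]
print(list(filter_committed(log)))        # [('T1', ('x', 1))]
```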

Another reason to postprocess the log is to group together the updates of each transaction. This is beneficial because it enables transactions to be applied to secondaries serially. After each transaction is applied, the secondary database is in a consistent state. Therefore a query can read a consistent database state without using read locks. If updates were not grouped by transaction, then in order for queries to read a consistent state, updates would have to set write locks and queries would have to set read locks.

Some systems allow different parts of the database to be replicated to different locations. For example, the primary might contain a table describing customers and other tables describing orders. These tables are colocated at the primary, since many transactions require access to all of these tables. However, the customer table may be replicated at different servers than the order tables. To enable this, the log postprocessing splits each transaction’s updates into two transactions, one for the customer table and one for the order tables, and adds them to separate streams, one for the customer replicas and one for the order replicas. If there is also a replica that contains all the customer and orders information, then the log postprocessor would generate a third stream for that replica, with all the updates of each transaction packaged in a single transaction in the stream.
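
A sketch of this transaction-splitting step, using the customer/order example above; the table names and the three output streams are illustrative assumptions.

```python
def split_streams(transactions):
    """Split each transaction's updates into per-replica update streams.

    'transactions' is a list of (txn_id, updates) pairs, where each update is
    (table, key, value). One transaction at the primary becomes up to two
    transactions, one per stream, plus one combined transaction for a replica
    that holds both tables.
    """
    customer_stream, order_stream, combined_stream = [], [], []
    for txn_id, updates in transactions:
        customer_updates = [u for u in updates if u[0] == "customer"]
        order_updates = [u for u in updates if u[0] == "orders"]
        if customer_updates:
            customer_stream.append((txn_id, customer_updates))
        if order_updates:
            order_stream.append((txn_id, order_updates))
        combined_stream.append((txn_id, updates))
    return customer_stream, order_stream, combined_stream
```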

Given this complex filtering and transaction splitting, often a “buffer database” is used to store updates that flow from the primary to secondaries. Updates that are destined for different replicas are stored in different areas of the buffer database. This allows them to be applied to replicas according to different schedules.

Some systems allow application-specific logic to be used to apply changes to replicas. For example, the application could add a timestamp that tells exactly when the update was applied to the replica.

Although primary-copy replication normally does not allow transactions to update a secondary before updating the primary, there are situations where it can be made to work. For example, consider an update transaction that reads and writes a replica using two-phase locking. Suppose it keeps a copy of all the values that it read, which includes all the data items that the transaction wrote. When it is ready to commit, it sends the values of data items that it read along with values that it wrote to the primary. Executing within the context of the same transaction, the primary reads the same data items that the transaction read at the secondary, setting locks as in normal two-phase locking. If the values of the data items that it reads at the primary are the same as those that the transaction read at the secondary, then the transaction applies its updates to the primary too, and commits. If not, then it aborts. This is essentially an application of the optimistic concurrency control technique described in Section 6.8.
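
A sketch of the validation step at the primary, assuming the database is a simple key-value map and omitting the two-phase locking that a real primary would perform; the function name and argument layout are hypothetical.

```python
def commit_at_primary(primary_db, read_set, write_set):
    """Validate and apply a transaction that executed at a secondary.

    read_set:  {key: value} as observed by the transaction at the secondary
    write_set: {key: value} produced by the transaction
    Returns True if the transaction commits, False if it must abort.
    """
    # Reread at the primary the same items the transaction read at the
    # secondary (a real primary would set two-phase locks while doing so).
    for key, value_seen_at_secondary in read_set.items():
        if primary_db.get(key) != value_seen_at_secondary:
            return False                 # a value changed: validation fails, abort
    primary_db.update(write_set)         # values match: apply the writes and commit
    return True
```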

Most database systems offer considerable flexibility in configuring replication. Subsets of tables can be independently replicated, possibly at different locations. For example, a central office’s Accounts table can be split by branch, and the accounts for each branch are replicated at the system at that branch. As the number of replicated tables grows, it can be rather daunting to keep track of which pieces of which tables are replicated at which systems. To simplify management tasks, systems offer tools for displaying, querying, and editing the configuration of replicas.

The replication services of most database systems work by constructing a log stream or log table of updates and sending it to secondary servers. This approach was introduced in Tandem’s (now HP’s) Non-Stop SQL and in Digital’s VAX Data Distributor in the 1980s. Similar approaches currently are offered by IBM, Informix (now IBM), Microsoft (SQL Server), MySQL, Oracle, and Sybase. Within this general approach, products vary in the specific features they offer: the granularity of data that can be replicated (a database, a table, a portion of a table); the flexibility of selecting primaries and secondaries (a server can be a primary server for some data and a secondary for others); how dynamically the configuration of primaries and secondaries can be changed; the options for filtering updates and splitting transactions; and facilities to simplify managing a large set of replicas.

Failures and Recoveries

This primary-copy approach works well as long as the primary and secondaries are alive. How do we handle failures? Let us work through the cases.

Secondary Recovery

If a secondary replica fails, the rest of the system continues to run as before. When the secondary recovers, it needs to catch up processing the stream of updates from the primary. This is not much different than the processing it would have done if it had not failed; it’s just processing the updates later. The main new problem is that it must determine which updates it processed before it failed, so it doesn’t incorrectly reapply non-idempotent updates. This is the same problem as log-based database recovery that we studied in Chapter 7.

If a secondary is down for too long, it may be more efficient to get a whole new copy of the database rather than processing an update stream. In this case, while the database is being copied from the primary to the recovering secondary, more updates are generated at the primary. So after the database has been copied, to finish up, the secondary needs to process that last stream of updates coming from the primary. This is similar to media recovery, as described in Chapter 7.

Primary Recovery with One Secondary

If the primary fails, recovery can be more challenging. One could simply disallow updates until the primary recovers. This is a satisfactory approach when the main goal of replication is better performance for queries. In fact, it may be hard to avoid this approach if complex filtering and partitioning of updates is supported. Since different secondaries receive a different subset of the changes that were applied to the primary, secondaries are often not complete copies of the primary. Therefore, it would be difficult to determine which secondaries should take over as primary for which parts of the primary’s data.

If a goal of replication is improved availability for updates, then it is usually not satisfactory to wait for the primary to recover, since the primary could be down for awhile. So if it is important to keep the system running, some secondary must take over as primary. This leads to two technical problems. First, all replicas must agree on the selection of the new primary, since the system cannot tolerate having two primaries—this would lead to total confusion and incorrect results. Second, the last few updates from the failed primary may not have reached all replicas. If a replica starts processing updates from the new primary before it has received all updates from the failed primary, it will end up in a different state than other replicas that did receive all the failed primary’s updates.

We first explore these problems in a simple case of two replicas, one primary and one secondary. Suppose the secondary detects that the primary has failed. This failure detection must be based on timeouts. For example, the secondary is no longer receiving log records from the primary. And when the secondary sends “are you there?” messages to the primary, the secondary receives no reply from the primary. However, these timeouts may be due to a communications failure between the primary and secondary, similar to the one shown in Figure 9.7, and the primary may still be operating.

To distinguish between a primary failure and a primary-secondary communications failure, an external agent is needed to decide which replica should be primary. A typical approach is to add a “watchdog” process, preferably on a different machine than the primary and secondary. The watchdog sends periodic “are you there?” messages to both the primary and secondary. There are four cases to consider (see Figure 9.9; a sketch of the decision logic follows the figure):

  1. If the watchdog can communicate with the primary and not with the secondary, then it tells the primary of this fact. If the primary can communicate with the secondary, then no action is needed. If not, then the primary creates another secondary, if possible.

  2. If the watchdog can communicate with both the primary and secondary, but they cannot communicate with each other, then they notify the watchdog of this fact. The watchdog then tells the secondary to fail, since it can no longer function as a replica. It also tells the primary to create another secondary, if possible.

  3. If the watchdog can communicate only with the secondary, then it tells the secondary that it believes the primary is down. If the secondary can communicate with the primary, then no action is needed. If not, then it can take over as primary. In this case, if the primary is still operational but is simply unable to communicate with the watchdog, then the primary must self-destruct. Otherwise, the old primary and the new primary (which was formerly the secondary) are both operating as primary. It may therefore be advisable that the watchdog send a message to tell the primary to self-destruct, in case the primary is able to receive messages from the watchdog but its replies are not getting through. In summary, if the secondary loses communications with the primary, then whichever replica can still communicate with the watchdog is now the primary.

  4. If neither replica can communicate with the watchdog or with each other, then neither replica can operate as the primary. This is called a total failure.

Figure 9.9. Failure Cases with a Watchdog. A watchdog process can help sort out failure cases between a primary and secondary.


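The four cases above can be written down as a small decision function. The boolean inputs describe which links currently work, and the returned action strings are purely illustrative, not a real watchdog protocol.

```python
def watchdog_decision(reach_primary, reach_secondary, pair_communicates):
    """Decide what the watchdog should request, per the four cases above."""
    if reach_primary and not reach_secondary:                        # case 1
        if pair_communicates:
            return []                                                # no action needed
        return ["tell primary: create another secondary if possible"]
    if reach_primary and reach_secondary and not pair_communicates:  # case 2
        return ["tell secondary: fail",
                "tell primary: create another secondary if possible"]
    if reach_secondary and not reach_primary:                        # case 3
        if pair_communicates:
            return []                                                # no action needed
        return ["tell secondary: take over as primary",
                "tell primary: self-destruct, in case it is still up"]
    if not reach_primary and not reach_secondary:                    # case 4
        return ["total failure: no replica may operate as primary"]
    return []  # watchdog reaches both and they communicate: normal operation
```
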
Suppose that the primary did indeed fail and the secondary has been designated to be the new primary. Now we face the second problem: the new primary may not have received all the committed updates performed by the former primary before the former primary failed. One solution to this problem is to have the primary delay committing a transaction’s updates until it knows that the secondary received those updates. The primary could wait until the secondary has stored those updates in stable storage, or it could wait only until the secondary has received the updates in main memory. If the system that stores the secondary has battery backup, then the latter might be reliable enough. In either case, we’re back to synchronous replication, where the updates to the replica are included as part of the transaction. This extra round of commit-time messages between the primary and secondary is essentially a simple two-phase commit protocol. The performance degradation from these messages can be significant. The choice between performance (asynchronous replication) and reliability (synchronous replication) depends on the application and system configuration. Therefore, database products that offer database mirroring usually offer both options, so the user can choose on a case-by-case basis.
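
A sketch of this commit-time round trip, where a single acknowledgment from the secondary stands in for the prepare/commit exchange; the function and parameter names are hypothetical.

```python
def commit_transaction(primary_log, send_to_secondary, updates, synchronous=True):
    """Commit at the primary, optionally waiting for the secondary's acknowledgment.

    send_to_secondary(updates) stands in for shipping the log records; in the
    synchronous case it returns True once the secondary has received them
    (in stable storage, or in main memory if that is deemed reliable enough).
    """
    primary_log.extend(updates)             # write the updates to the primary's log
    if synchronous:
        # The extra commit-time round trip: do not commit until the secondary
        # acknowledges, so no committed transaction can be lost on failover.
        if not send_to_secondary(updates):
            raise RuntimeError("secondary did not acknowledge; transaction aborted")
    else:
        send_to_secondary(updates)          # asynchronous: a failover may lose these
    primary_log.append("commit")            # the transaction is now committed
```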

Primary Recovery with Multiple Secondaries

Now let’s look at the more general case where there are multiple secondaries and a secondary detects the failure of a primary. There are two ways this could happen. The primary might indeed be down. Or, much worse, there could be a communication failure that partitions the network into independent sets of functioning replicas. In the latter case the primary could still be operational, so the set of replicas that doesn’t include the primary must not promote one of the secondaries to be a primary. The same problem can arise even in the first case where the primary is down. In this case, we do want to promote one of the secondaries to become primary. But if there are two independent sets of replicas that are operating, each set might independently promote a secondary to be the primary, a situation that we want to avoid.

To solve this problem, we need a decision criterion by which at most one set of replicas has an operational primary. We need an algorithm by which replicas can reach this decision. And after the replicas have chosen a new primary, we need an algorithm by which they can recover to the latest state before accepting new transactions. The next three sections treat each of these problems in turn.

Majority and Quorum Consensus

One simple way to ensure that only one primary exists is to statically declare one replica to be the primary. If the network partitions, the partition that has the primary is the one that can process updates. This is a feasible approach, but it is useless if the goal is high availability. If the primary is down, each partition has to assume the worst, which is that the primary really is running but not communicating with this partition. Thus, neither partition promotes a secondary to become primary.

A more flexible algorithm for determining which partition can have the primary is called majority consensus: a set of replicas is allowed to have a primary if and only if the set includes a majority of the replicas (see Figure 9.10). Since a majority is more than half, only one set of replicas can have a majority. Moreover, each partition can independently figure out if it has a majority. These are the two critical properties of majorities that make the technique work.

Figure 9.10. Majority Consensus. Partition 1 has a majority of the replicas and therefore is allowed to process updates. Partition 2 may not process updates.


Majority consensus is a generalization of the watchdog technique we described for database mirroring. The watchdog adds a third process to the mix. Two communicating processes comprise a majority. Thus, whichever partition has at least two communicating processes is allowed to have the primary: either the existing primary and secondary if the watchdog is down; or the watchdog plus whichever replica(s) it can communicate with. By convention, if the watchdog can communicate with the primary and secondary but the latter cannot communicate with each other, then the secondary is told to fail.

Majority consensus does have one annoying problem: it does not work well when there is an even number of copies. In particular, it is useless when there are just two replicas, since the only majority of two is two; that is, it can operate only when both replicas are available. When there are four replicas, a majority needs at least three, so if the network splits into two groups of two copies, neither group can have a primary.

A fancier approach is the quorum consensus algorithm. It gives a weight to each replica and looks for a set of replicas with a majority of the weight, called a quorum (see Figure 9.11). For example, with two replicas, we could give a weight of two to the more reliable replica and a weight of one to the other. That way, the replica with a weight of two can be primary even if the other replica is unavailable. Giving a weight of two to the most reliable replica helps whenever there is an even number of replicas. If the network partitions into two groups with the same number of copies, the group with the replica of weight two still has a quorum.
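
A sketch of the quorum test, which reduces to majority consensus when every replica has weight one; the weight assignment mirrors the two-replica example above and is purely illustrative.

```python
def has_quorum(partition, weights):
    """True if 'partition' (a set of replica ids) holds more than half the weight.

    With every weight equal to one this reduces to majority consensus.
    """
    total_weight = sum(weights.values())
    return sum(weights[r] for r in partition) * 2 > total_weight

weights = {"R1": 2, "R2": 1}            # the more reliable replica gets weight 2
print(has_quorum({"R1"}, weights))      # True:  weight 2 of 3 is more than half
print(has_quorum({"R2"}, weights))      # False: weight 1 of 3 is not
```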

Figure 9.11. Quorum Consensus. Partition 1 has a total weight of 4, which is more than half of the total weight of 7. It therefore constitutes a quorum and is allowed to process updates.


Reaching Consensus

During normal operation, the set of operational replicas must agree on which replicas are up and which are down or unreachable. If a replica loses communication with one or more other replicas, then the operational replicas need to reassess whether they still have a majority. (For the purpose of this discussion, we’ll assume majority consensus, not quorum consensus.) In fact, replicas that are up but not part of the current configuration also need to do this when they reestablish communications with a replica, since that replica may be the one they need to reach a majority. After some group of replicas is established as having a majority, that group can choose a primary and ensure that all replicas in the group have the most up-to-date state.

To discuss the details, we need some terminology: The replica set is the set of all replicas, whether up or down; the current configuration is the set of operational replicas that are able to communicate with each other and comprise a majority.

An algorithm that enables a set of processes to reach a common decision is called a consensus algorithm. In this case, that common decision is agreement on the current configuration by a set of operational replicas. Given our problem context, we’ll call the participants replicas instead of processes. But the algorithm we describe works for general consensus, not just for deciding on the current configuration.

One problem with such consensus algorithms is that multiple replicas may be trying to drive a common decision at the same time. It’s important that different replicas don’t drive the replicas toward different decisions.

Another problem is that the system may be unstable, with replicas and communications links failing and recovering while replicas are trying to reach consensus. There’s not much hope in reaching consensus during such unstable periods. However, once the system stabilizes, we do want the algorithm to reach consensus quickly.

There are several variations of algorithms to reach consensus, but they all have a common theme, namely, that there’s a unique identifier associated with the consensus, that these identifiers are totally ordered, and that the highest unique identifier wins. We will call that identifier an epoch number. It identifies a period of time, called an epoch, during which a set of replicas have agreed on the current configuration, called an epoch set. An epoch number can be constructed by concatenating a counter value with the unique replica identifier of the replica that generated the epoch number. Each replica keeps track of the current epoch number e in stable storage.
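
A sketch of constructing such epoch numbers as (counter, replica identifier) pairs, which compare lexicographically and are therefore totally ordered; the replica identifiers are invented for the example.

```python
def next_epoch_number(current_epoch, my_replica_id):
    """Generate an epoch number bigger than the current one.

    An epoch number is a (counter, replica_id) pair; pairs compare
    lexicographically, so epoch numbers generated by different replicas are
    totally ordered, with ties on the counter broken by the unique replica id.
    """
    counter, _ = current_epoch
    return (counter + 1, my_replica_id)

e1 = (7, "R3")
e2 = next_epoch_number(e1, "R1")
print(e2, e2 > e1)    # (8, 'R1') True
```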

During stable periods, the epoch set with largest epoch number is the current configuration. During unstable periods, the actual current configuration may differ from the current epoch set. The goal of the consensus algorithm is to reach agreement on a new epoch set with associated epoch number that accurately describes the current configuration.

Suppose a replica R is part of the current configuration, which has epoch number e1. If R detects that the current configuration is no longer valid (because R has detected a failure or recovery), R becomes the leader of a new execution of the consensus algorithm, which proceeds as follows (a sketch of the leader’s side appears after the list):

  1. R generates a new epoch number e2 that is bigger than e1. For example, it increments the counter value part of e1 by one and concatenates it with R’s replica identifier.

  2. R sends an invitation message containing the value e2 to all replicas in the replica set.

  3. When a replica R′ receives the invitation, it replies to R with an accept message if R′ has not accepted another invitation with an epoch number bigger than e2. R′ includes its current epoch number in the accept message. Moreover, if R′ was the leader of another instance of the consensus algorithm (which is using a smaller epoch number), it stops that execution. Otherwise, if R′ has accepted an invitation with an epoch number bigger than e2, it sends a reject message to R. As a courtesy, it may return the largest epoch number of any invitation it has previously accepted.

  4. R waits for its timeout period to expire, to ensure it receives as many replies as possible.

    1. If R receives accept messages from enough other replicas that, together with R itself, they constitute a majority of the replicas in the replica set, then it has reached consensus. It therefore sends a new epoch message to all the accepting replicas and stops. The new epoch message contains the new epoch number and epoch set. When a replica receives a new epoch message, it updates its epoch number and the associated list of replicas in the epoch set and writes it to stable storage.

    2. Otherwise, R has failed to reach a majority and stops.

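A sketch of steps 1 through 4 from the leader’s point of view, with message passing abstracted into a single callback; it omits the reject-with-higher-epoch courtesy of step 3 and the back-off discussed below, and the function names are hypothetical.

```python
def run_consensus_round(my_id, current_epoch, replica_set, send_invitation):
    """One leader-side round of the invitation/accept protocol above.

    send_invitation(replica_id, epoch) stands in for the network: it returns
    True if that replica accepts the invitation and raises TimeoutError if it
    cannot be reached.
    """
    # Step 1: generate an epoch number bigger than the current one.
    new_epoch = (current_epoch[0] + 1, my_id)

    # Steps 2 and 3: invite every replica and collect acceptances.
    accepted = {my_id}                         # the leader counts itself
    for replica in replica_set:
        if replica == my_id:
            continue
        try:
            if send_invitation(replica, new_epoch):
                accepted.add(replica)
        except TimeoutError:
            pass                               # unreachable replicas simply don't reply

    # Step 4: consensus is reached only if the acceptors form a majority.
    if len(accepted) * 2 > len(replica_set):
        return new_epoch, accepted             # broadcast as the "new epoch" message
    return None                                # failed to reach a majority; stop
```
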
Let’s consider the execution of this algorithm under several scenarios. First, assume that only one leader R is running this algorithm. Then it will either receive enough accept messages to establish a majority and hence a new epoch set, or it will fail to reach a majority.

Suppose a leader R1 fails to establish an epoch set. One reason this could happen is that R1 may be unable to communicate with enough replicas to establish a majority. In this case, R1 periodically could attempt to re-execute the algorithm, in case a replica or communication link has silently recovered and thereby made it possible for R1 to form a majority.

A second reason that R1 may fail to establish an epoch set is that another replica R2 is concurrently trying to create a new epoch set using a higher epoch number. In this case, it is important that R1 not rerun the algorithm right away with a larger epoch number, since this might kill R2’s chance of getting a majority of acceptances. That is, it might turn into an “arms race,” where each replica reruns the algorithm with successively higher epoch numbers and thereby causes the other replica’s consensus algorithm to fail.

The arms race problem notwithstanding, if R1 fails to establish an epoch set and, after waiting awhile, receives no other invitations to join an epoch set with higher epoch number, then it may choose to start another round of the consensus algorithm. In the previous round, if it received a reject message with a higher epoch number e3, then it can increase its chances of reaching consensus by using an epoch number even higher than e3. This ensures that any replica that is still waiting for the result of the execution with epoch number e3 will abandon waiting and choose the new, higher epoch number instead.

Establishing the Latest State

After the current configuration has been established as the epoch set, the primary needs to be selected and all the replicas in the current configuration have to be brought up to date. The first step is to determine if the new epoch set includes the primary from the previous epoch. To do this, first observe that since every epoch set has a majority, it overlaps every earlier epoch set. Therefore, there is at least one replica in the new epoch set from the previous epoch set and it has the largest epoch number less than the new epoch number. If one of the replicas with the largest previous epoch number was the primary of that epoch set, then we can simplify recovery by reusing it as the primary of the new epoch set.

Unfortunately, this may not be right, because the last epoch may not have stabilized before it lost its majority and had to reconfigure. If that happened, then the primary of the previous epoch set may not have the latest state. That is, the previous epoch set may have elected the primary but not yet refreshed the new primary’s state to be the latest state known to all replicas in the epoch set. To avoid this outcome, the new epoch set needs to identify the last stable epoch set. This can be done by having each epoch use a state bit that it sets after it has stabilized and ensured that every replica in the epoch set has the latest state. Only then can the epoch set accept new work.

Therefore, the new epoch set should determine if it includes the primary of the last stable epoch set. If so, then it knows that this primary has the most up-to-date state. So to resume normal processing, the primary needs to ensure the secondaries are up to date by determining the state of each secondary and sending it whatever updates it is missing. It then sets the epoch’s state bit to stable and broadcasts that to all secondaries.

If the epoch set does not include the primary from the previous epoch, then a new primary must be selected. The choice of primary may be based on the amount of spare capacity on its machine (since a primary consumes more resources than a secondary) and on whether it was a member of the most recent epoch set and thus has the latest or a very recent state (so that it can recover quickly and start accepting new requests).

The latest state of the secondaries that are still alive can be determined by comparing the sequence numbers of the last message received by each secondary from the previous primary. The one with the highest sequence number has the latest state and can forward the tail of its update sequence to the other secondaries that need it. After a replica receives that state, it acknowledges that fact to the primary. After the new primary receives acknowledgments from all replicas in the epoch set, it can set the epoch’s state to stable and start processing new transactions. The new primary should then start off with a message sequence number greater than the largest sequence number received by any secondary in the previous epoch.
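
A sketch of this catch-up step, assuming each replica’s update sequence is held in a list whose index plays the role of the sequence number; the function name and data layout are assumptions made for the example.

```python
def bring_epoch_up_to_date(replicas):
    """Establish the latest known state within the new epoch set.

    'replicas' maps replica id -> list of updates received from the previous
    primary, in sequence-number order (the list index plays the role of the
    sequence number). Returns the most up-to-date replica and the next
    sequence number the new primary should use.
    """
    # The replica that received the most updates has the latest known state.
    most_current = max(replicas, key=lambda r: len(replicas[r]))
    latest = replicas[most_current]

    # Forward the missing tail of the update sequence to every other replica.
    for replica_id, updates in replicas.items():
        if replica_id != most_current:
            updates.extend(latest[len(updates):])

    # Once every replica acknowledges, the epoch is marked stable and the new
    # primary starts with a sequence number beyond any used in the previous epoch.
    return most_current, len(latest)
```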

Does the new epoch set actually have the latest state? To answer this question, let C be the set of replicas that were in both the previous epoch and the new one. Since each epoch set has a majority of the replicas, C must include at least one replica. The replicas in C are the only ones that might know the latest state. However, as in the case of database mirroring, it’s possible that none of them actually do know the latest state, due to the delay in propagating updates from the primary to the replicas. For example, suppose the epoch set for epoch 1 had replicas P, S1, and S2, with P as the primary. Suppose the last transaction was committed by P and S1, but not S2. Then they all died, and epoch set 2 was formed, consisting of replicas S2, S3, and S4. Epoch sets 1 and 2 overlap by one replica, S2, but S2 doesn’t have the latest state.

We encountered this problem when considering secondary recovery for database mirroring. The solution we offered was to propagate updates synchronously. In that case, two-phase commit is needed. This ensures that every replica in C has all the committed updates. However, the last few updates might be in their uncertainty periods and hence blocked. Thus, while synchronous replication reduces the number of transactions that might be lost when a secondary takes over as primary, it doesn’t close the gap entirely.

Consistency, Availability, and Partition-Tolerance

In distributed systems, there is an inherent tradeoff between data consistency, system availability, and tolerance to network partitions. A system can offer any two of these three properties, but not all three of them. This is known as the CAP conjecture.

The primary-copy approach with synchronous replication ensures data consistency and partition-tolerance, and therefore gives up on availability in some cases. It attains data consistency by writing updates to replicas as part of the transaction that performed the write and using two-phase commit for transaction atomicity. It attains partition-tolerance by using quorum consensus to ensure that there are not two partitions that are both able to run transactions. This leads to a loss of availability in the case where the network partitions, because some operational replicas are not part of the quorum. Therefore, even though they are up and running, they are not available.

Suppose the network partitions and the partition that has a quorum of replicas does not include the former primary. Although the system can ensure the updates are permitted only on the quorum of copies, it cannot guarantee consistency because the last few transactions that executed at the former primary may not have arrived at any of the replicas in the quorum before the network partition occurred. Thus, a decision to allow updates to the quorum of replicas is trading off consistency for availability. A decision to disallow updates to the quorum of replicas is making the opposite tradeoff, namely, trading off availability in order to ensure consistency.

Another aspect of this tradeoff is eventual consistency versus instantaneous consistency. Asynchronous replication ensures eventual consistency but gives up on instantaneous consistency, since there may be a long delay before updates are propagated to some replicas. The weaker level of consistency improves performance by avoiding two-phase commit. It may also improve availability, by allowing a user to be redirected from one replica to another in a slightly different state.

For example, suppose an on-line shopper has started populating her shopping cart and the shopping cart is replicated using primary-copy replication. Suppose the primary fails and is not present in the quorum. To maximize availability, the system could service read requests using another replica while the replicas are being brought up to date. Thus, during this period, the shopper might be given an older state of her shopping cart. This may occur even if the last update to the cart is known to the quorum, because the shopper’s reads are being serviced by a slightly out-of-date replica. This may be confusing, especially since the shopping cart will return to the latest state after the replicas are brought up to date. However, if the probability of this occurrence is sufficiently low, this loss of consistency may be regarded as a better tradeoff than having the shopping cart be unavailable while the replicas are being brought up to date.

A different set of tradeoffs between consistency, availability, and partition-tolerance is offered by multimaster replication. We will consider these tradeoffs at the end of the next section.

