
Essential Facts about Index Covering in DB2 Universal Database

developerWorks

Level: Introductory

Alexander Kuznetsov, Chicago, IL

13 Mar 2003

Author Alexander Kuznetsov uses examples to illustrate cases in which adding a column or two to an index can boost a query's performance by "covering" the columns used in a SELECT statement. The DB2 optimizer is very good at recognizing such situations and can choose a very efficient execution plan.

This article is written for IBM® DB2® Universal Database® for Linux, UNIX®, and Windows®.

Introduction

Sometimes adding a column or two to an index can boost a query's performance ten times or more. I am going to write about those situations in which all the columns necessary to execute a SELECT query are found in one or more indexes; in other words, the index (or indexes) covers the query. In such cases an execution plan involving only indexes may be the most efficient one.

The DB2 optimizer is very good at recognizing such situations and can choose a very efficient execution plan involving index covering. I'll go through several examples in which adding one or more columns to an index results in much faster query execution, because:

  • Indexes are usually smaller than tables, so index scans usually require less I/O.
  • Index entries are already sorted, so selects with appropriate ORDER BY, GROUP BY, or DISTINCT clauses run without sorting result sets.
  • Index entries are already sorted, so merge joins do not need any sorting.

I will also discuss how to have a single index cover several queries, which can be useful in keeping the number of indexes as low as possible.





The schema used in this article

Here are the tables I will use in my examples:

                        CREATE TABLE CUSTOMER_DATA(
                        ID INT NOT NULL,
                        FIRST_NAME VARCHAR(30),
                        LAST_NAME VARCHAR(30),
                        SALUTATION VARCHAR(10),
                        ADDRESS VARCHAR(30),
                        CITY VARCHAR(30),
                        STATE CHAR(2),
                        ZIP VARCHAR(10),
                        PHONE_AREA SMALLINT,
                        PHONE_NUMBER INTEGER,
                        TOTAL_AMOUNT FLOAT,
                        SOME_OTHER_DATA VARCHAR(100));
                        CREATE TABLE CUSTOMER_ORDER(
                        CUSTOMER_ID INT NOT NULL,
                        ORDER_NUMBER SMALLINT NOT NULL,
                        ORDER_DT DATE,
                        AMOUNT FLOAT,
                        SOME_OTHER_DATA VARCHAR(100));
                        ALTER TABLE CUSTOMER_DATA ADD PRIMARY KEY(ID);
                        ALTER TABLE CUSTOMER_ORDER ADD PRIMARY KEY(CUSTOMER_ID, ORDER_NUMBER);
                        ALTER TABLE CUSTOMER_ORDER ADD FOREIGN KEY(CUSTOMER_ID)
                        REFERENCES CUSTOMER_DATA(ID);
                        CREATE INDEX CUSTOMER_NAMES ON CUSTOMER_DATA(LAST_NAME, FIRST_NAME);
                        

(Note that DB2 automatically creates indexes to implement primary key constraints.)

We'll also assume that the statistics on both tables are current and detailed, and were collected with distribution. The default optimization level, 5, is used for all the examples in this article.
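For example, such statistics could be collected with RUNSTATS in the same form used later in this article (a minimal sketch; the DB2INST1 schema is an assumption carried over from the plan output shown below):

                        -- Sketch: collect detailed statistics with distribution on both tables
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_DATA WITH DISTRIBUTION AND DETAILED INDEXES ALL;
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_ORDER WITH DISTRIBUTION AND DETAILED INDEXES ALL;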





Advantage: Speeding up a frequently run query

Consider a very simple query that runs frequently:

                        SELECT SUM(AMOUNT) FROM CUSTOMER_ORDER WHERE CUSTOMER_ID = 1000
                        

The optimizer chooses to access rows in the table via an index on CUSTOMER_ID.

Let's save the query in a file named sum_amt.sql and measure the real execution costs of this query using db2batch, as follows:

                        db2batch -d test2 -f sum_amt.sql -r perf_pk.txt -o p 3
                        

(The full syntax of the db2batch command is described in reference [3].) After running the command, the file perf_pk.txt contains a report on the costs of executing this query:

                        ...
                        Buffer pool data logical reads               = 12
                        Buffer pool data physical reads              = 10
                        ...
                        Buffer pool index logical reads              = 3
                        Buffer pool index physical reads             = 1
                        

In this particular case the index depth was 3. The root index page and the relevant non-leaf page were already in the bufferpool, so only the leaf-level index page required a physical read (index physical reads = 1). (Note that the index entries matching a key can span two or more leaf pages.) Only two of the relevant data pages were in the bufferpool, so the remaining 10 had to be read from disk. It is quite typical of OLTP workloads that most index pages are already in the bufferpool. Also, in this case the table was not clustered on the index, so the 12 records matching the search criteria were distributed across the table, as shown in Figure 1.


Figure 1. Table access is used to satisfy the query
Table access is used to satisfy the query

This query could have better performance if the table had been clustered on the index on CUSTOMER_ID. For more information on clustered indexes, refer to reference [2].
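As a sketch (not part of the schema used in the rest of this article), a clustering index on CUSTOMER_ID could be created with the CLUSTER option and the table reorganized on it; the index name ORDER_CUST_CLUST is hypothetical:

                        -- Hypothetical clustering index; not used in the remaining examples
                        CREATE INDEX ORDER_CUST_CLUST ON CUSTOMER_ORDER(CUSTOMER_ID) CLUSTER;
                        REORG TABLE DB2INST1.CUSTOMER_ORDER INDEX DB2INST1.ORDER_CUST_CLUST;
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_ORDER WITH DISTRIBUTION AND DETAILED INDEXES ALL;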

Now let's create an index to speed up the query:

                        CREATE INDEX ORDER_CUST_AMT ON CUSTOMER_ORDER(CUSTOMER_ID, AMOUNT);
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_ORDER WITH DISTRIBUTION AND DETAILED INDEXES ALL;
                        

(For more information on the syntax of RUNSTATS, refer to reference [3].) Note that we must collect statistics explicitly, because they aren't created automatically when you create the index. This new index contains all the columns necessary to satisfy the query. The optimizer has chosen to run this query accessing only this index, without touching the table at all:


Figure 2. Index-only access is used to satisfy the query
Index-only access is used to satisfy the query

As shown in Figure 2, in this particular case the index depth did not increase after adding the second column, and the index entries matching the search criteria fit into one leaf page. Note that the index depth could have increased, because the size of the index entries increased; the matching index entries could also have spanned two or more leaf pages.

Now let's look at the execution plan generated by the dynexpln utility and the real execution costs:

                        dynexpln -d test2 -o output.txt -q "SELECT SUM(AMOUNT) FROM
                        CUSTOMER_ORDER WHERE CUSTOMER_ID = 1000"
                        ...
                        |  |  Index-Only Access
                        (...)
                        Buffer pool data logical reads               = 0
                        Buffer pool data physical reads              = 0
                        (...)
                        Buffer pool index logical reads              = 3
                        Buffer pool index physical reads             = 1
                        

(For more information on dynexpln, refer to references [2] and [1].)

The index-only access shows that the cost of executing the query is much lower. Should index depth increase and the entries span two pages at the same time, the query would require five reads, still much better than when accessing the table.

Although this seems like a simplistic example, it illustrates the advantages of index covering. As we have seen, index covering minimized the amount of I/O needed to run a query, significantly improving its performance. Also, remember that in this particular case:

  • On average, at least several records match the search criteria.
  • The table is not clustered on the index on CUSTOMER_ID, so these records are usually distributed across the table.
  • Most columns are not necessary to satisfy the query.

Under different circumstances, the advantages of index covering would be less pronounced, if any.

We have just discussed an example in which a very small number of records were retrieved and these records were distributed across the table. Another very similar example is:

                        SELECT FIRST_NAME, LAST_NAME, PHONE_AREA, PHONE_NUMBER
                        FROM CUSTOMER_DATA ORDER BY LAST_NAME FETCH FIRST 30 ROWS ONLY;
                        

Again, only a tiny number of records is retrieved (at most 30). If there is an index on FIRST_NAME, LAST_NAME, PHONE_AREA, PHONE_NUMBER, it will cover the query. Again, if the cluster ratio of this index is low, this query's performance will improve significantly, for the same reasons as in the first example.
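A minimal sketch of such a covering index (the index name DATA_PHONE_LIST is hypothetical; the column list follows the text above):

                        -- Hypothetical covering index for the phone-directory query above
                        CREATE INDEX DATA_PHONE_LIST ON CUSTOMER_DATA(FIRST_NAME, LAST_NAME, PHONE_AREA, PHONE_NUMBER);
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_DATA WITH DISTRIBUTION AND DETAILED INDEXES ALL;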





Advantage: Scanning an index rather than a table

If a table is big, scanning it to satisfy a query may be costly. Whenever an index is smaller than its base table (and it usually is), scanning an index might be a cheaper alternative.

Consider a simple query:

                        SELECT LAST_NAME, FIRST_NAME FROM CUSTOMER_DATA WHERE LAST_NAME LIKE '%STONE'
                        

Let us assume there is an index on LAST_NAME, but that the index on LAST_NAME, FIRST_NAME has been dropped. Note that the predicate LAST_NAME LIKE '%STONE' is not indexable. The optimizer has chosen a tablespace scan, as we can see by looking at the output from dynexpln:

                        dynexpln -d test2 -o likestone1.txt -q "SELECT LAST_NAME,
                        FIRST_NAME FROM CUSTOMER_DATA WHERE LAST_NAME LIKE '%STONE'"
                        SQL Statement:
                        SELECT LAST_NAME, FIRST_NAME
                        FROM CUSTOMER_DATA
                        WHERE LAST_NAME LIKE '%STONE'
                        Estimated Cost        = 10381
                        Estimated Cardinality = 1383
                        Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  #Columns = 1
                        |  Relation Scan
                        |  |  Prefetch: Eligible
                        |  Lock Intents
                        |  |  Table: Intent Share
                        |  |  Row  : Next Key Share
                        |  Sargable Predicate(s)
                        |  |  #Predicates = 1
                        |  Return Data to Application
                        |  |  #Columns = 2
                        Return Data Completion
                        

When the index on LAST_NAME, FIRST_NAME is created, the optimizer chooses to use it:

                        db2 "CREATE INDEX DC_LN ON CUSTOMER_DATA(LAST_NAME, FIRST_NAME)"
                        db2 RUNSTATS ON TABLE DB2INST1.CUSTOMER_DATA WITH
                        DISTRIBUTION AND DETAILED INDEXES ALL
                        dynexpln -d test2 -o likestone2.txt -q "SELECT LAST_NAME,
                        FIRST_NAME FROM CUSTOMER_DATA WHERE LAST_NAME LIKE '%STONE'"
                        SQL Statement:
                        SELECT LAST_NAME, FIRST_NAME
                        FROM CUSTOMER_DATA
                        WHERE LAST_NAME LIKE '%STONE'
                        Estimated Cost        = 707
                        Estimated Cardinality = 1383
                        Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  #Columns = 2
                        |  Index Scan:  Name = DB2INST1.DC_LN  ID = 2
                        |  |  Index Columns:
                        |  |  |  1: LAST_NAME (Ascending)
                        |  |  |  2: FIRST_NAME (Ascending)
                        |  |  #Key Columns = 0
                        |  |  |  Start Key: Beginning of Index
                        |  |  |  Stop Key: End of Index
                        |  |  Index-Only Access
                        |  |  Index Prefetch: Eligible 84
                        |  |  Sargable Index Predicate(s)
                        |  |  |  #Predicates = 1
                        |  |  |  Return Data to Application
                        |  |  |  |  #Columns = 2
                        |  Lock Intents
                        |  |  Table: Intent Share
                        |  |  Row  : Next Key Share
                        Return Data Completion
                        

As we have seen, the estimated cost of an index scan is much lower than the cost of a tablespace scan, simply because the index is much smaller. If the index is frequently used, its pages are quite likely to be found in the bufferpool, and in that case the real execution-cost advantage of index covering is even greater.





Advantage: Eliminating the need to sort result sets

Whenever a query contains an ORDER BY, GROUP BY, or DISTINCT clause, its execution may require a sort, and sorting may consume significant resources. Index entries are already sorted, so index covering may eliminate the need to sort a result set, improving the query's performance. For example, consider a query with an ORDER BY clause:

                        SELECT LAST_NAME, FIRST_NAME, PHONE_AREA, PHONE_NUMBER
                        FROM CUSTOMER_DATA
                        ORDER BY LAST_NAME, FIRST_NAME
                        

Although there is an index on LAST_NAME, FIRST_NAME, the table is not clustered on it. In this case it is sometimes more efficient to perform a full tablespace scan and sort the result set than to access the table via the index. (There is a brief discussion of a similar problem in my previous article, When We Think the Optimizer Doesn't Get It Right.) This is what the optimizer has chosen in this particular case:

                        SQL Statement:
                        SELECT LAST_NAME, FIRST_NAME, PHONE_AREA, PHONE_NUMBER
                        FROM CUSTOMER_DATA
                        ORDER BY LAST_NAME, FIRST_NAME
                        Estimated Cost        = 25736
                        Estimated Cardinality = 59616
                        Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  #Columns = 4
                        |  Relation Scan
                        |  |  Prefetch: Eligible
                        |  Lock Intents
                        |  |  Table: Intent Share
                        |  |  Row  : Next Key Share
                        |  Insert Into Sorted Temp Table  ID = t1
                        |  |  #Columns = 4
                        |  |  #Sort Key Columns = 2
                        |  |  |  Key 1: LAST_NAME (Ascending)
                        |  |  |  Key 2: FIRST_NAME (Ascending)
                        |  |  Sortheap Allocation Parameters:
                        |  |  |  #Rows     = 59616
                        |  |  |  Row Width = 44
                        |  |  Piped
                        Sorted Temp Table Completion  ID = t1
                        Access Temp Table  ID = t1
                        |  #Columns = 4
                        |  Relation Scan
                        |  |  Prefetch: Eligible
                        |  Return Data to Application
                        |  |  #Columns = 4
                        Return Data Completion
                        

For this query, sorting the result set uses up most of the resources: both the estimated and the real execution costs for the same query without the ORDER BY clause are at least 50% lower. With an index covering the query, the select runs almost instantly, because:

  • It needs to read fewer pages (the index is smaller than its table)
  • It does not have to sort the results

Let's create an index on LAST_NAME, FIRST_NAME, PHONE_AREA, PHONE_NUMBER and collect statistics, as sketched below.
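A minimal sketch of the statements (the index name CUST_DIRECTORY is taken from the plan output that follows):

                        -- Covering index for the ORDER BY query above; name matches the plan below
                        CREATE INDEX CUST_DIRECTORY ON CUSTOMER_DATA(LAST_NAME, FIRST_NAME, PHONE_AREA, PHONE_NUMBER);
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_DATA WITH DISTRIBUTION AND DETAILED INDEXES ALL;

With this index in place, the execution plan involves index covering: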

                        SQL Statement:
                        SELECT LAST_NAME, FIRST_NAME, PHONE_AREA, PHONE_NUMBER
                        FROM CUSTOMER_DATA
                        ORDER BY LAST_NAME, FIRST_NAME
                        Estimated Cost        = 685
                        Estimated Cardinality = 59616
                        Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  #Columns = 4
                        |  Index Scan:  Name = DB2INST1.CUST_DIRECTORY  ID = 4
                        |  |  Index Columns:
                        |  |  |  1: LAST_NAME (Ascending)
                        |  |  |  2: FIRST_NAME (Ascending)
                        |  |  |  3: PHONE_AREA (Ascending)
                        |  |  |  4: PHONE_NUMBER (Ascending)
                        |  |  #Key Columns = 0
                        |  |  |  Start Key: Beginning of Index
                        |  |  |  Stop Key: End of Index
                        |  |  Index-Only Access
                        |  |  Index Prefetch: Eligible 79
                        |  |  |  Return Data to Application
                        |  |  |  |  #Columns = 4
                        |  Lock Intents
                        |  |  Table: Intent Share
                        |  |  Row  : Next Key Share
                        Return Data Completion
                        

Similarly, index covering improves performance of queries with GROUP BY and DISTINCT clauses. For example, an index on LAST_NAME, FIRST_NAME covers these two queries, eliminating any need for sorting during their execution and thereby improving their performance:

                        SELECT LAST_NAME, FIRST_NAME, COUNT(*) FROM CUSTOMER_DATA
                        GROUP BY LAST_NAME, FIRST_NAME;
                        SELECT DISTINCT LAST_NAME, FIRST_NAME FROM CUSTOMER_DATA;
                        

Note: Suppose all the columns of a unique index are included in the select list. In that case the result set is already distinct. The DB2 optimizer is able to recognize such situations and eliminate the unnecessary sort even when there is no index covering.
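For instance (this example is mine, not one of the original plans), since (CUSTOMER_ID, ORDER_NUMBER) is the primary key of CUSTOMER_ORDER and therefore unique, a query such as the following already returns distinct rows, so no sort is needed even though no index covers it:

                        -- (CUSTOMER_ID, ORDER_NUMBER) is unique, so every row is already distinct,
                        -- even though SOME_OTHER_DATA is not in any index
                        SELECT DISTINCT CUSTOMER_ID, ORDER_NUMBER, SOME_OTHER_DATA
                        FROM CUSTOMER_ORDER;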

As we have seen, index covering can significantly improve performance of select queries with ORDER BY, GROUP BY or DISTINCT clauses.





Advantage: Speeding up joins between parent and child tables

The DB2 optimizer is smart enough to choose index covering even when more than one table is involved in a query. Consider this simple query:

                        SELECT CUSTOMER_DATA.FIRST_NAME, CUSTOMER_DATA.LAST_NAME,
                        CUSTOMER_DATA.PHONE_AREA, CUSTOMER_DATA.PHONE_NUMBER,
                        CUSTOMER_ORDER.ORDER_NUMBER, CUSTOMER_ORDER.ORDER_DT,
                        CUSTOMER_ORDER.AMOUNT
                        FROM CUSTOMER_DATA JOIN CUSTOMER_ORDER ON ID = CUSTOMER_ID
                        

If the only indexes that exist are those used for implementing primary and foreign key constraints, the estimated cost of execution is:

                        Estimated Cost        = 40421
                        Estimated Cardinality = 357696
                        

Let us add appropriate indexes. (Note that we might want to have these indexes implement the primary and foreign key constraints, as discussed later.)

                        CREATE INDEX DATA_IND ON CUSTOMER_DATA(ID, FIRST_NAME,
                        LAST_NAME, PHONE_AREA, PHONE_NUMBER);
                        CREATE INDEX ORDER_IND ON CUSTOMER_ORDER(CUSTOMER_ID,
                        ORDER_NUMBER, ORDER_DT, AMOUNT);
                        

Now the query runs much faster.

                        Estimated Cost        = 15404
                        Estimated Cardinality = 357696
                        Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  #Columns = 5
                        |  Index Scan:  Name = DB2INST1.DATA_IND  ID = 3
                        |  |  Index Columns:
                        |  |  |  1: ID (Ascending)
                        |  |  |  2: FIRST_NAME (Ascending)
                        |  |  |  3: LAST_NAME (Ascending)
                        |  |  |  4: PHONE_AREA (Ascending)
                        |  |  |  5: PHONE_NUMBER (Ascending)
                        |  |  #Key Columns = 0
                        |  |  |  Start Key: Beginning of Index
                        |  |  |  Stop Key: End of Index
                        |  |  Index-Only Access
                        (snip)
                        Merge Join
                        |  Access Table Name = DB2INST1.CUSTOMER_ORDER  ID = 2,6
                        |  |  #Columns = 3
                        |  |  Index Scan:  Name = DB2INST1.ORDER_IND  ID = 1
                        |  |  |  Index Columns:
                        |  |  |  |  1: CUSTOMER_ID (Ascending)
                        |  |  |  |  2: ORDER_NUMBER (Ascending)
                        |  |  |  |  3: ORDER_DT (Ascending)
                        |  |  |  |  4: AMOUNT (Ascending)
                        |  |  |  #Key Columns = 0
                        |  |  |  |  Start Key: Beginning of Index
                        |  |  |  |  Stop Key: End of Index
                        |  |  |  Index-Only Access
                        

In this case, neither table was clustered on CUSTOMER_ID. Should both parent and child tables be clustered on CUSTOMER_ID, the advantages of index covering would be less pronounced, maybe even insignificant.

The cost advantage of index covering for this query (15404 vs. 40421) is quite impressive. As we have seen, it makes a lot of sense to use index covering to speed up joins.





Using one index to cover several queries

Suppose that we need to improve performance of several frequently run queries, such as:

                        SELECT CUSTOMER_ID, SUM(AMOUNT) FROM CUSTOMER_ORDER GROUP
                        BY CUSTOMER_ID;
                        SELECT CUSTOMER_ID, MIN(ORDER_DT) FROM CUSTOMER_ORDER GROUP
                        BY CUSTOMER_ID;
                        SELECT SUM(AMOUNT) FROM CUSTOMER_ORDER;
                        

Of course, we could create an index (or a materialized query table) for every query we need to optimize:

                        CREATE INDEX CUST_ID_AMT ON CUSTOMER_ORDER(CUSTOMER_ID,
                        AMOUNT) ALLOW REVERSE SCANS;
                        CREATE INDEX CUST_ID_ORDER_DT ON CUSTOMER_ORDER
                        (CUSTOMER_ID, ORDER_DT) ALLOW REVERSE SCANS;
                        CREATE INDEX CUST_AMOUNT ON CUSTOMER_ORDER(AMOUNT) ALLOW
                        REVERSE SCANS;
                        

However, additional indexes slow down all modifications against their base tables and use up disk space. In many cases we need to find a compromise between speeding up select queries and slowing down inserts, updates, and deletes. In such cases it can be very useful to have one index cover several queries. For example, here is an index that covers all of the queries listed above:

                        CREATE INDEX ORDER_COVERING ON CUSTOMER_ORDER(CUSTOMER_ID,
                        ORDER_DT, AMOUNT) ALLOW REVERSE SCANS;
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_ORDER WITH DISTRIBUTION
                        AND DETAILED INDEXES ALL;
                        





Using one index to cover a query and to implement a constraint

Here is yet another technique for keeping the number of indexes low. In this example, I will use one index both to implement a primary key constraint and to cover all the queries used in this article:

                        ALTER TABLE CUSTOMER_ORDER DROP PRIMARY KEY;
                        CREATE UNIQUE INDEX ORDER_PK
                        ON CUSTOMER_ORDER(CUSTOMER_ID, ORDER_NUMBER)
                        INCLUDE(ORDER_DT, AMOUNT) ALLOW REVERSE SCANS;
                        ALTER TABLE CUSTOMER_ORDER ADD PRIMARY KEY(CUSTOMER_ID, ORDER_NUMBER);
                        RUNSTATS ON TABLE DB2INST1.CUSTOMER_ORDER WITH DISTRIBUTION AND DETAILED INDEXES ALL;
                        

In this example, DB2 will issue a warning and reuse the already existing index ORDER_PK to implement the primary key constraint.

The INCLUDE clause in the CREATE INDEX statement means that the index guarantees the uniqueness of (CUSTOMER_ID, ORDER_NUMBER) pairs and also contains the columns ORDER_DT and AMOUNT. For more details on the syntax of CREATE INDEX, see reference [4].

However, use this technique in moderation. In many cases it is better to have a smaller index to implement a constraint.
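As a sketch, the smaller alternative is a unique index on just the key columns, which is essentially what DB2 creates by default when the primary key constraint is added:

                        -- Minimal unique index backing only the primary key constraint
                        CREATE UNIQUE INDEX ORDER_PK ON CUSTOMER_ORDER(CUSTOMER_ID, ORDER_NUMBER);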





Using more than one index to cover one query

Now let's look at how the optimizer chooses to use two indexes to cover one query. Let us suppose the index on LAST_NAME, FIRST_NAME does not exist (we used this index in several earlier sections, but not in this one):

                        DROP INDEX CUSTOMER_NAMES
                        

Here are the two indexes and the query:

                        CREATE INDEX DATA_FIRST_NAME ON CUSTOMER_DATA(FIRST_NAME)
                        CREATE INDEX DATA_LAST_NAME ON CUSTOMER_DATA(LAST_NAME)
                        SELECT FIRST_NAME, LAST_NAME FROM CUSTOMER_DATA WHERE
                        ((FIRST_NAME LIKE 'RO%') AND (LAST_NAME LIKE 'TRA%'))
                        

(Should there be an index on FIRST_NAME, LAST_NAME, it would most likely be chosen to satisfy the query. We are considering the situation in which such a convenient index does not exist.) In this particular case, the optimizer has chosen to access these two indexes, not the table itself:

                        |  |  Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  |  |  #Columns = 1
                        |  |  |  Index Scan:  Name = DB2INST1.DATA_LAST_NAME  ID = 2
                        |  |  |  |  Index Columns:
                        |  |  |  |  |  1: LAST_NAME (Ascending)
                        |  |  |  |  #Key Columns = 1
                        |  |  |  |  |  Start Key: Inclusive Value
                        |  |  |  |  |  |  1: 'TRA                 ...'
                        |  |  |  |  |  Stop Key: Inclusive Value
                        |  |  |  |  |  |  1: 'TRAZZZZZZZZZZZZZZZZZ...'
                        |  |  |  |  Index-Only Access
                        |  |  |  |  Index Prefetch: None
                        (snip)
                        |  Index ANDing Bitmap Probe
                        |  |  Access Table Name = DB2INST1.CUSTOMER_DATA  ID = 2,5
                        |  |  |  #Columns = 1
                        |  |  |  Index Scan:  Name = DB2INST1.DATA_FIRST_NAME  ID = 1
                        |  |  |  |  Index Columns:
                        |  |  |  |  |  1: FIRST_NAME (Ascending)
                        |  |  |  |  #Key Columns = 1
                        |  |  |  |  |  Start Key: Inclusive Value
                        |  |  |  |  |  |  1: 'RO                  ...'
                        |  |  |  |  |  Stop Key: Inclusive Value
                        |  |  |  |  |  |  1: 'ROZZZZZZZZZZZZZZZZZZ...'
                        |  |  |  |  Index-Only Access
                        

As we have seen, in this particular case we benefited from index covering without having to create another index: this time the DB2 optimizer chose to use two existing indexes. Keep that in mind when deciding whether to create yet another index.





Summary

As we have seen, there are various scenarios in which index covering can dramatically speed up select queries. Because adding columns to indexes can slow down changes to the underlying tables, use this technique only after careful analysis of your particular situation.
