Research Designs

 赛波 2009-09-22

INTRODUCTION

Scientific research involves the systematic investigation of events of interest so as to increase our understanding of those events. As applied to cognitive behavior therapy and behavior modification with children and adolescents, scientific research often consists of inquiry that improves our knowledge of childhood psychopathology and the variables that impact the development, maintenance, and treatment of disordered behavior. To conduct research in a rigorous way, investigators use specific methods of selecting participants, arranging experimental conditions, and gathering and analyzing data. These methods are referred to as research designs. Various experimental, quasi-experimental, and correlational designs are used in research on child behavior modification and cognitive behavior therapy. Selection of an appropriate design depends on several variables, including the goals of research, the topic under investigation, characteristics of participants such as age, ethical and practical considerations, and a host of other factors. Prior to discussing some of the specific designs that are most commonly used when conducting research on cognitive behavior therapy and behavior modification with children and adolescents, information about the goals and categories of research is presented.

GOALS OF RESEARCH

Scientific inquiry can occur for many different reasons. The design that one selects depends on the goal of conducting the research. In this section, the goals of research in the context of conducting investigations on cognitive behavior therapy and/or behavior modification with children are discussed. Psychological research focused on other issues will likely have other goals, which are beyond the scope of this entry.

One common goal of psychological research on cognitive behavior therapy and behavior modification is to identify treatments that work for specific disordered behavior and emotion patterns. In other words, it is important to document that a particular treatment is effective in ameliorating a specific problematic response, as compared to no treatment. A second, related goal is determining the number/percentage of youngsters with a given problem who respond positively to a specific intervention. Accomplishing this goal is important in improving the understanding of the likelihood of treatment effectiveness for any given patient, which has important clinical implications. For example, if researchers determine that only 25% of a given population of children with a particular disorder respond positively to an intervention, then clinicians may not wish to devote a large amount of time to learning and then implementing the approach, particularly if there are other interventions that have been proven to be effective for a larger percentage of the population.

Third, researchers are often interested in determining which intervention is most effective with a given clinical population. To accomplish this goal, researchers can compare the impact of two or more types of treatment for a clinical problem and determine which produces the best outcomes. For example, investigators may wish to compare the use of exposure-based treatment, medication, and a combination of exposure-based treatment and medication to treat social phobia in adolescents. Fourth, investigation of the variables that influence the effectiveness of treatment is important. Evaluating the variables that impact treatment—such as age of the youngster, comorbid problems, or family involvement, to name a few—helps clinicians determine whether optimal conditions exist to use a particular intervention with a client. Fifth, researchers might be interested in investigating the mechanisms that produce change as part of a treatment. Because treatments for psychological conditions typically include multiple components, investigators may wish to determine which treatment components are the most essential to success in order to streamline the intervention. Doing so can increase the cost effectiveness of intervention. Studies attempting to accomplish this goal are often referred to as dismantling studies, whereby individual components of a treatment package are isolated and their relative influence on the clinical problems is determined.

CATEGORIES OF RESEARCH DESIGNS

Efficacy Versus Effectiveness Studies

Treatment efficacy research refers to investigations conducted on intervention strategies using a high degree of control over all aspects of the study. In this type of research, the investigator carefully selects participants who fit a clear, narrowly defined set of criteria and implements the intervention in a prescribed, regimented way in a very controlled setting. In efficacy studies, participants are screened for comorbid conditions, and only those with the particular condition of interest are included in the study. Researchers then implement a standardized protocol for each participant, with essentially no variation to account for individual differences. Research involving children with separation anxiety disorder who have no comorbid conditions utilizing a standardized, manualized 8-week treatment program would be an example of efficacy research. This type of research is advantageous when investigating whether particular methods of intervention produce desired outcomes for specific problems.

In contrast, treatment effectiveness research involves investigating the impact of an intervention in typical clinical settings with typical clients who may have one or more comorbid conditions, rather than in a controlled setting with carefully selected participants. When conducting clinical effectiveness research, the participants are generally referred for treatment rather than carefully screened and selected, and the intervention is implemented without many of the rigorous safeguards used in efficacy research. Continuing the example provided when discussing efficacy research, effectiveness research might involve applying a manualized treatment to all children who present to an outpatient mental health clinic with separation anxiety disorder. No attempts would be made to screen out children with comorbid conditions, and while a treatment manual may be available for use, treatment providers may not be mandated to follow strict guidelines for implementation.

Both efficacy and effectiveness studies are important in behavior modification and cognitive behavior therapy with children and adolescents. The former allow for a better understanding of which treatments are likely to be successful for particular types of presenting problems. Furthermore, efficacy research provides estimates as to the number of children with particular conditions that are likely to respond successfully and generates data that are useful for analyzing variables that may be related to treatment success or failure. Effectiveness studies expand on the findings of efficacy research by addressing how well interventions work in real-world settings. This type of research can be very helpful for determining the generality of results from efficacy studies. For instance, suppose efficacy research demonstrates that Treatment A is effective with children who have separation anxiety disorder. Effectiveness research might show that Treatment A has diminished, although still positive, effects when children present with comorbid conduct problems, thus helping establish the parameters of the intervention.

Intersubject Versus Intrasubject Research

Research methods can vary depending on whether they involve intersubject versus intrasubject analyses. Intersubject research involves investigating variation across subjects. The goal of this type of research is typically to learn about the aggregate, or mean, occurrence of a specified variable. Therefore, such research typically involves comparing groups of individuals on relevant dimensions. An example of intersubject research with childhood populations might be comparing the outcomes of 50 children diagnosed with attention-deficit/hyperactivity disorder (ADHD) who were treated with parent management training (PMT) to the outcomes of 50 children of similar age and gender also diagnosed with ADHD who were treated with PMT plus stimulant medication. By comparing the aggregate of the data collected on targeted clinical problems (e.g., tantrums), researchers can determine which intervention (i.e., PMT alone versus PMT plus stimulant medication) produced the greatest impact.

Intrasubject research involves investigating variation within subjects. In other words, repeated samples of an individual's performance are taken across several conditions, making it possible to determine whether differences in performance exist across conditions. Research using intrasubject analysis can involve either comparing repeated measurements of the variable of interest demonstrated by one person across conditions, or comparing aggregate data obtained from a group of individuals all exposed to the same conditions. Because intrasubject research has been used extensively in cognitive-behavioral approaches to treatment, these designs are discussed in more detail in the section on single-subject designs.

Experimental Versus Correlational Research

Research designs can also be distinguished by whether they provide information about experimentally determined relationships between the independent variable and dependent variable, or whether the results allow only for a determination of the strength of association (i.e., correlation) between different variables. In the first type of investigation (experimental), the investigator must be able to directly manipulate an independent variable (e.g., psychological treatment) so as to observe its impact on the dependent variable (e.g., clinical problem). Furthermore, the design must include a method by which to compare levels of the dependent variable during treatment and nontreatment conditions, so as to determine whether any differences exist. As such, it is necessary, but not sufficient, for the investigator to have tight control over all aspects of the study so as to eliminate sources other than the independent variable that might account for changes in the dependent variable.

In the second type of research (correlational), the goal is to evaluate the strength of association between two or more variables. This type of research may be most appropriate when it is not feasible (e.g., ethical, practical) to directly manipulate experimental conditions so as to observe changes in the dependent variables of interest. For example, it would be clearly unethical to expose children to various levels and types of child abuse so as to evaluate the impact of such experiences on emotional and behavioral wellbeing. Instead, to conduct such an investigation, one would need to obtain a sample of children who, through their unfortunate experiences, have experienced varied levels of exposure to such events, and then assess the degree to which the experiences are associated (i.e., correlated) with particular behavioral or emotional profiles. While this type of research does not allow for a determination of the causal role of abuse, it may be the only way to investigate certain aspects of children's lives.
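As a minimal sketch of the correlational logic just described, the example below computes a Pearson correlation between two invented variables: a severity-of-exposure score and a behavior-problem score for ten hypothetical children. All names and values are made up for illustration; the coefficient measures strength of association only and cannot establish that exposure caused the problems.

```python
# Hypothetical correlational analysis: association between exposure level
# and behavior-problem scores. Data are invented; no causal claim follows.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

exposure = [0, 1, 1, 2, 3, 4, 5, 6, 8, 9]            # invented exposure levels
problems = [48, 50, 52, 55, 54, 60, 63, 61, 70, 74]  # invented problem scores

r = pearson_r(exposure, problems)
print(f"r = {r:.2f}")  # near +1 indicates a strong positive association
```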

SINGLE-SUBJECT DESIGNS

Single-subject designs (SSDs) are valuable for completing intrasubject research, in that their use is based primarily on demonstrating experimental control through replication of the impact of an independent variable on a case-by-case basis rather than by aggregation of obtained data across cases. When using SSDs, the individual serves as his or her own control, and performance is compared across different conditions. Given the nature of SSDs, they are particularly useful for experimental research designed to evaluate the impact of particular interventions on the behavior of an individual. In this way, their use is well suited for analyzing the relationship between particular independent variables and changes in dependent variables.

The various SSDs available share three important features. First, they involve repeated observation of the phenomena of interest, typically across time in standardized conditions. Specifically, observation of the dependent variable occurs multiple times (e.g., over several hours of a day or days of a week) across a minimum of two conditions: baseline (i.e., pretreatment) and treatment. Second, SSDs have in common the use of methods that involve replication of the experimental effect. Thus, with the exception of multiple baseline designs (see below for discussion of how this is accomplished using these types of designs), each participant in single-subject research is repeatedly exposed to baseline and treatment conditions. With each demonstration of changes in the dependent variable as a function of removing or adding the independent variable, one's confidence in the causal relationship between the two increases. Third, SSDs typically have in common the practice of changing only one variable at a time in order to observe the impact of that change on the phenomena of interest. For example, if a researcher were investigating effective interventions for treating nighttime bed-wetting, the investigator would want to introduce only one intervention (e.g., the urine alarm) at the start of the treatment phase. If more than one variable is changed, then it becomes impossible to determine which variable is operative in terms of producing change in the dependent variable.

SSDs can be divided into two distinct categories: withdrawal designs and stage-process designs. Each is discussed next.

Withdrawal Designs

Withdrawal designs typically involve a minimum of two phases (i.e., A, or baseline, and B, or treatment). The underlying strategy employed to demonstrate experimental control using the various withdrawal designs is the repeated introduction and withdrawal of the treatment. That is, after observing the dependent variable under baseline conditions, experimenters then implement a treatment phase. Take a child who engages in noncompliance as an example. After establishing the typical rate of noncompliance to adult instructions, the experimenter may then teach the parent a new method of giving instructions (i.e., simple, one-step instructions). Assuming a change in the dependent variable in the desired direction (that is, noncompliance decreases and compliance increases), the treatment is then withdrawn (i.e., return to baseline condition). If the removal of the treatment condition produces a return to baseline rates of the dependent variable, then experimental control over that variable has been demonstrated. Such a design is often referred to as an A-B-A design. In child behavior modification and cognitive-behavioral therapy research, withdrawal designs typically involve a return to the treatment condition so as to end the experiment in the condition that produces desired outcomes in targeted clinical phenomena. This type of withdrawal design (i.e., the A-B-A-B design) is perhaps the most common form of withdrawal design used in research.

Multiple variations of the simple principle of changing from baseline to treatment conditions exist, allowing flexibility in how one demonstrates experimental control of the independent variable over the dependent variable. For example, assume a situation in which it would not be ethical to sustain pretreatment conditions so as to establish base rates of the dependent variable (e.g., the child behavior of interest is dangerous to self or others, the behavior problem can no longer be tolerated by care providers). In this situation, the investigator may wish to use a B-A-B design, whereby the intervention is implemented immediately so as to (hopefully) improve the targeted behavioral disturbance as fast as possible. Or, assume a situation in which the investigator is interested in determining the relative effects of adding a second treatment component to an existing one. To accomplish this, an A-B-BC-B-BC design could be used. In this scenario, the C represents a second treatment component, and the investigator would be able to evaluate the change that is produced in the dependent variable by adding this component to the first intervention. To demonstrate this, consider the example provided earlier regarding increasing child compliance. After establishing rates of compliance under normal (i.e., A) conditions, the investigator teaches the care provider to give simple, direct one-step commands (i.e., B). While this might result in some increase in child compliance, perhaps the desired outcome was not achieved. As a result, the care provider is taught to also provide specific, labeled praise contingent upon compliance (i.e., C). The rate of compliance is then assessed when the combined treatment is in place (i.e., BC phase). After establishing the new rate of compliance, the second treatment is removed so as to evaluate whether rates of compliance return to levels previously obtained under the first treatment condition alone.
Assuming a return to previous rates, the combined treatment could then be reintroduced so as to reestablish improved child compliance.
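The compliance example can be summarized numerically. The sketch below uses invented daily compliance percentages for each phase of an A-B-BC-B-BC design; note that in practice, withdrawal designs are typically evaluated by visual inspection of graphed data rather than by phase means alone.

```python
# Invented compliance percentages illustrating the replication logic of an
# A-B-BC-B-BC design: compliance rises under BC, falls back toward B levels
# when C is withdrawn, and rises again when BC is reintroduced.
from statistics import mean

phases = {
    "A  (baseline)":          [20, 25, 22, 18],
    "B  (simple commands)":   [45, 50, 48, 52],
    "BC (commands + praise)": [80, 85, 82, 88],
    "B  (praise removed)":    [50, 47, 49, 46],
    "BC (reintroduced)":      [83, 86, 84, 87],
}

for label, percents in phases.items():
    print(f"{label:24s} mean compliance = {mean(percents):.1f}%")
```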

Stage-Process Designs

Although the flexibility of withdrawal designs allows their use in many research situations, there are conditions under which they are not appropriate. For example, in some situations it may be unethical to withdraw a treatment (e.g., following elimination of a targeted behavior that produces serious harm to self or others) or perhaps the treatment produces lasting and “irreversible” change in behavior (i.e., once a child has been taught to read through intervention, removing the intervention will likely not result in the elimination of learned reading skills). In these situations, use of stage-process designs may be appropriate. Multiple stage-process designs exist, including multielements designs, multiple baseline designs, and changing criterion designs.

The multielements design (also referred to as the simultaneous treatment design or the alternating treatments design) differs from other SSDs in that multiple conditions (e.g., baseline and treatment conditions, or two or more treatment conditions) are conducted in rapid succession and compared against each other, with the order of presentation typically determined through random selection. For example, perhaps baseline conditions are in effect on one day, treatment conditions the next, and so forth.

With the multielements design, often there are three phases: baseline, comparison (rapid alternation between two or more conditions), and the use of the effective intervention. In some situations, however, the baseline might not be necessary. This may be particularly true if one of the comparison conditions is a baseline condition. One treatment condition is judged to be superior if it produces data patterns in the expected direction at a level that is greater than other conditions. Another component of the multielements design is that an equal number of sessions of each condition should be conducted. To ensure discriminated responding across conditions, researchers often pair separate but salient stimuli with each condition. This design may be particularly useful if the investigator is comparing interventions that have an immediate effect and when the dependent measure is particularly sensitive to changes in stimulus conditions (i.e., reversible).
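The scheduling logic of the comparison phase can be sketched as follows, with condition names invented for illustration: conditions alternate in a randomly determined order, while each condition is conducted an equal number of times.

```python
# Sketch of a randomized session schedule for a multielement (alternating
# treatments) comparison. Condition names and session counts are invented.
import random

def build_schedule(conditions, sessions_per_condition, seed=0):
    """Return a randomized session order containing each condition equally often."""
    pool = list(conditions) * sessions_per_condition
    rng = random.Random(seed)  # seeded only so the sketch is reproducible
    rng.shuffle(pool)
    return pool

schedule = build_schedule(["baseline", "treatment 1", "treatment 2"], 4)
print(schedule)
```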

A second type of stage-process design is the multiple baseline design. There are three types of multiple baseline designs: (1) multiple baseline across behaviors, (2) multiple baseline across persons, and (3) multiple baseline across settings. With the multiple baseline design across behaviors, the investigator evaluates the impact of an intervention across different behaviors emitted by the same person. As such, this is a within-subjects design. The intervention is applied sequentially to the different (presumably) independent behaviors. The second design—multiple baseline across persons—involves the evaluation of the impact of a particular intervention across at least two individuals matched, according to relevant variables, who are presumed to be exposed to identical (or at least markedly similar) environments. Finally, with the multiple baseline across settings design, a particular intervention is applied sequentially to a single participant or group of participants across independent environments (e.g., home and school).

Technically, there must be at least two separate dimensions (i.e., behaviors, persons, or settings) present to utilize a multiple baseline design, although convention suggests a minimum of three or more. Multiple baseline designs are characterized by the presence of only two conditions: baseline and treatment. Treatment is introduced in such a way that one is able to evaluate experimental control of the independent variable. With these designs, the baseline condition is extended for increasing lengths of time as the intervention is introduced with the other dependent variables. Thus, these designs are particularly useful for studying irreversible effects, because replication is achieved without withdrawal and reintroduction of the independent variable. Experimental control is inferred based on the comparison of nontreated dependent variables as compared to the treated variables and thus is not demonstrated directly.
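The staggered structure described above can be sketched schematically. In the example below (with invented session counts and a multiple baseline across settings), the baseline (A) phase is extended longer in each successive setting before treatment (B) begins, so the effect is replicated without ever withdrawing the intervention.

```python
# Sketch of the staggered phase structure of a multiple baseline design
# across three settings. All session counts are invented for illustration.
def phase_labels(total_sessions, treatment_start):
    """Label each session 'A' (baseline) before treatment_start, 'B' after."""
    return ["A" if s < treatment_start else "B" for s in range(total_sessions)]

schedule = {
    "home":       phase_labels(12, 4),   # treatment begins at session 4
    "school":     phase_labels(12, 7),   # ...at session 7
    "playground": phase_labels(12, 10),  # ...at session 10
}

for setting, labels in schedule.items():
    print(f"{setting:10s} {' '.join(labels)}")
```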

The changing criterion design is characterized by the presence of only one baseline and one treatment phase. However, implementation of the treatment condition involves the sequential introduction of different performance goals. In other words, the treatment phase is applied until the targeted dependent variable achieves a specified level of performance. At that time, the goal (i.e., criterion) of performance is altered and the intervention continues until the behavior again achieves the desired level. Changes in the criterion occur until the dependent measure is occurring at the desired terminal level. As such, the changing criterion design is particularly well suited for situations in which the investigator is interested in evaluating shaping programs that are expected to result in increases or decreases in the dependent measure (e.g., increase amount of seatwork a student will complete prior to needing a break). Evaluation of the intervention as the causal agent occurs through two comparisons: between the occurrence of the dependent measure during baseline and treatment, and between the occurrence of the dependent measure across the different levels of the intervention. If the dependent variable changes in the desired direction only when the criterion changes, then the investigator can have confidence in the controlling nature of the independent variable.
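The evaluation logic of the changing criterion design can be sketched with the seatwork example; criterion levels and observed performance below are invented. Control is inferred when performance shifts to match each new criterion.

```python
# Sketch of a changing criterion design for shaping seatwork duration.
# All values are hypothetical: each criterion step requires more minutes of
# seatwork before a break is given.
criteria = [5, 10, 15, 20]  # required minutes at each step
observed = [                # invented sessions observed at each step
    [5, 6, 5],
    [10, 11, 10],
    [15, 16, 15],
    [20, 21, 20],
]

# Control is suggested if performance meets each criterion only after the
# criterion changes to that level.
tracks_criterion = all(
    min(sessions) >= criterion
    for criterion, sessions in zip(criteria, observed)
)
print("performance tracked each criterion change:", tracks_criterion)
```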

GROUP DESIGNS

In contrast to single-subject designs that focus primarily on idiographic approaches to researching human phenomena, group designs take a decidedly nomothetic approach to understanding general laws and principles of human behavior. In this way, group designs involve comparing aggregated data from two or more groups exposed to different experimental conditions. Group designs may be used when conducting correlational or experimental research. Two general categories of group designs exist: between-groups comparison designs and within-groups comparison designs. Each is explored next, followed by a discussion of other group designs that are based on the general methods of between-groups and within-groups designs.

Between-Groups Comparison Designs

Between-groups comparison designs involve investigating how different groups of individuals perform when exposed to some form of experimental manipulation. At their most basic, between-groups designs involve two groups: an experimental group (i.e., typically exposed to a treatment, or independent variable) and a control group (i.e., typically exposed to no experimental manipulation). An important hallmark of such designs is group equivalence on important variables (e.g., gender, age, type and/or level of pathology), so that experimenters can be confident that any differences between groups are due to experimental conditions instead of preexisting differences in group members. Equivalence is achieved either through random assignment of participants to groups or through matching group members based on their presentation of specific characteristics hypothesized to be important to the investigation. As an example of matching, investigators researching the difference between juvenile sex offenders and nonoffending delinquent peers in terms of emotional and/or behavioral health might match participants based on age, history of abuse, and presence of psychopathology, all of which have been shown in research to correlate with emotional and behavioral outcomes. If group equivalence is achieved, either through random assignment or matching, any differences between the groups on the dependent variables being investigated are presumed to be the result of the differences in experimental conditions to which the group members were exposed.

Several different between-group comparison designs have been described in the literature. Perhaps the most common version of this type of design is the pretest-posttest control group design. This design typically includes two groups (i.e., an experimental group that is exposed to an intervention and a control group that is not), with participants randomly assigned to groups. Participants of both groups are tested both prior to the treatment (i.e., pretest) and following treatment (i.e., posttest). For members of the control group, posttest assessment typically occurs once the amount of time has passed that it took to implement the treatment with the members of the experimental group. This design is particularly popular in treatment research, because the format allows for an assessment in the changes of psychological symptoms in the presence and absence of treatment, and any difference in the amount of change between groups can be attributed to the independent variable.
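The core comparison in a pretest-posttest control group design can be sketched with invented symptom scores (lower = improvement): the mean pre-to-post change in the experimental group is compared against the mean change in the control group.

```python
# Sketch of the change-score comparison in a pretest-posttest control group
# design. All scores are invented for illustration.
from statistics import mean

def mean_change(pre, post):
    """Average per-participant change from pretest to posttest."""
    return mean(b - a for a, b in zip(pre, post))

exp_pre,  exp_post  = [30, 28, 32, 29], [18, 17, 20, 19]   # treated group
ctrl_pre, ctrl_post = [31, 29, 30, 28], [29, 28, 29, 27]   # untreated group

exp_delta  = mean_change(exp_pre, exp_post)
ctrl_delta = mean_change(ctrl_pre, ctrl_post)
print(f"experimental change: {exp_delta}, control change: {ctrl_delta}")
```

The difference between the two change scores, rather than either change alone, is what is attributed to the independent variable.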

The posttest-only control group design is another between-groups comparison design available for researchers investigating cognitive behavior therapy and behavior modification with children. This design is essentially the same as the pretest-posttest control group design, with one major exception. With this design, no assessment is conducted prior to implementing the treatment with the experimental group. The posttest-only design is practical when it is not feasible to conduct pretest assessment. Examples of situations that might preclude utilizing pretest assessment include a concern that exposing participants to pretest measures will sensitize them to aspects of the experiment and thus skew any results, or when it is too expensive or time-consuming due to a large sample size.

Factorial design is another example of between-groups comparison designs and is used when the investigator is interested in assessing the influence of two or more independent variables, or two or more levels/variations of one independent variable. For example, perhaps an investigator is interested in evaluating a classroom-based reinforcement program for sustained attention to task for children diagnosed with ADHD. However, the investigator may not be sure of the optimal level of treatment. Using a factorial design, the investigator could compare three versions of the same treatment (i.e., hourly reward, daily reward, weekly reward) to see which most effectively increases attention to task. When using a factorial design, as many levels and independent variables as is desired may be used to address the question at hand. In this way, use of the factorial design can be an economical means of conducting research.

Within-Groups Comparison Designs

While between-groups comparison designs are useful because of their flexibility and statistical power (i.e., the probability of detecting an effect of treatment when it occurs), at times their use may be precluded due to practical difficulties related to creating group equivalence. Moreover, certain research questions are best answered by exposing the same individuals to different experimental conditions (i.e., conditions containing different independent variables or conditions with different levels of the same independent variable). In these situations, use of within-groups comparison designs, sometimes referred to as repeated measures designs, may be appropriate. Several within-groups designs exist, including within-groups designs to assess multiple levels of one independent variable, pretest-posttest within-groups designs, and factorial within-groups designs. Each is discussed next.

Investigators are often interested in assessing the differing amount of influence of various levels of one independent variable on the dependent variable of interest. While this is possible to do with between-groups designs, the number of participants required for statistical power increases with each level of the independent variable. Continuing the example provided earlier, assume the researcher is interested in investigating the influence of three levels of a reinforcement program for attention problems in children with ADHD. If it is determined that 60 participants are needed in each group for the test to be powerful, then the total number of participants needed would be 180 (i.e., 60 × 3 groups). If obtaining this large number of participants is not feasible, then using a within-groups design is an appropriate alternative.
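The sample-size arithmetic above can be made explicit: a between-groups design needs a separate sample for each level of the independent variable, whereas a within-groups design reuses the same participants in every level.

```python
# Participant requirements for the three-level reinforcement example:
# between-groups designs multiply the per-group sample by the number of
# levels; within-groups designs do not.
per_group = 60  # participants required per condition for adequate power
levels = 3      # levels of the reinforcement program

between_groups_n = per_group * levels  # a fresh group per level: 180 total
within_groups_n = per_group            # the same 60 serve in every level
print(between_groups_n, within_groups_n)
```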

To use a within-groups comparison design to assess the influence of different levels of the independent variable, all participants would be exposed to each variation of the experimental condition, and measures of the dependent variable(s) would occur at least one time in each variation. Investigators would then compare the obtained measures of the dependent variable(s) across the different levels of the independent variable so as to determine which level had the greatest impact.

Another type of within-groups design is the pretest-posttest within-groups design. Using this approach, a pretest is given to all participants to measure the dependent variable(s). Following the pretest, all participants are exposed to an experimental condition that contains only one independent variable (or one level of an independent variable). Following this, the posttest is given so as to reassess the dependent variable(s). Although this type of design might be beneficial in certain situations (e.g., smaller-scale programs, unethical to create a control group that is not exposed to treatment), it lacks safeguards against threats to internal validity because of the lack of a control group. In other words, it does not allow one to determine if other extraneous variables accounted for any changes in the dependent variable.

The factorial within-group design is similar to the between-groups factorial design in that it involves an examination of two or more independent variables, or two or more levels of the same independent variable, in the same experiment. Furthermore, as with the between-groups version, the number or level of independent variables is theoretically only limited by the feasibility of the study itself. It differs from the between-groups version in that each participant is exposed to the different independent variables.

Other Group Designs

Mixed designs blend aspects of between-groups and within-groups comparison designs. One category of mixed designs is counterbalanced designs. These designs are used to eliminate the influence of order effects on the dependent variable, which are inherent with within-groups designs. Counterbalanced designs involve ensuring that order effects are distributed equally across groups, with approximately an equal number of participants in each group receiving the treatment conditions in the same order. Using the example of various levels of reinforcement to increase attention to task, counterbalancing might look like the following: Group 1 exposure order is Level 1, Level 2, and Level 3; Group 2 exposure order is Level 2, Level 3, Level 1; and Group 3 exposure order is Level 3, Level 1, Level 2. Counterbalanced designs include either complete counterbalancing (i.e., all potential variations of the order of presenting the independent variable are used) or incomplete counterbalancing (i.e., ensuring that a portion of all possible orders of presentation are presented to roughly an equivalent number of participants).
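Complete counterbalancing for the three reinforcement levels in the running example can be sketched by enumerating every possible presentation order; each order is then assigned to a group so that order effects are spread evenly.

```python
# Sketch of complete counterbalancing: enumerate all presentation orders of
# three treatment levels (level names follow the running example).
from itertools import permutations

levels = ["Level 1", "Level 2", "Level 3"]
orders = list(permutations(levels))  # 3! = 6 possible orders

for group, order in enumerate(orders, start=1):
    print(f"Group {group}: {' -> '.join(order)}")

# Incomplete counterbalancing would instead assign only a subset of these
# orders (e.g., a Latin square) to roughly equal numbers of participants.
```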

A mixed factorial design combines between-groups and within-groups comparisons in the same investigation. For example, suppose the investigator conducting the study described above on increasing attention to task is interested in the influence of gender on treatment effectiveness. Participants would be divided by gender (i.e., male versus female, the between-groups comparison), and within each group, participants would be exposed to the different levels of the reinforcement program (the within-groups comparison). Using this design, the investigator can analyze the influence of gender, levels of reinforcement, and the interaction of gender and levels of reinforcement on attention to task.
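The structure of that 2 (gender, between) x 3 (reinforcement level, within) design can be illustrated with the cell means that feed its three questions: the main effect of gender, the main effect of level, and their interaction. All scores below are hypothetical, and a full analysis would use a mixed-model ANOVA rather than raw means:

```python
# Hypothetical attention-to-task scores, two participants per cell.
scores = {
    ("male", "Level 1"): [40, 42], ("male", "Level 2"): [55, 57],
    ("male", "Level 3"): [60, 62], ("female", "Level 1"): [44, 46],
    ("female", "Level 2"): [54, 56], ("female", "Level 3"): [65, 67],
}

def mean(values):
    return sum(values) / len(values)

# Cell means: one per gender-by-level combination.
cell_means = {cell: mean(vals) for cell, vals in scores.items()}

# Marginal means for the between-groups factor (gender) collapse across levels.
gender_means = {
    g: mean([m for (gg, _), m in cell_means.items() if gg == g])
    for g in ("male", "female")
}
print(gender_means)
```

Comparing the two marginal gender means addresses the between-groups question; comparing the three level means (collapsed across gender) addresses the within-groups question; and non-parallel patterns of cell means across levels would indicate an interaction.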

To research the ways in which youth change over time, such as when investigating developmental pathways for psychological conditions, experimenters often rely on research designs that allow for gathering data over extended periods of time when children are at different ages. Two types of designs are commonly used for this type of research: cross-sectional and longitudinal designs. Cross-sectional designs involve studying children of different ages at the same point in time and comparing them against each other. For example, a researcher interested in the behavioral expression of ADHD across childhood might compare groups of children who are 5, 10, and 15 years of age on several measures of inattention, distractibility, and impulsivity/hyperactivity. As such, this is a between-groups comparison design. Using a longitudinal approach, the same set of individuals is studied over an extended period of time. Continuing the ADHD example, a longitudinal approach would involve assessing the same group of children at ages 5, 10, and 15 and comparing the resulting data within the group across time. Thus, longitudinal research is typically considered a type of within-groups comparison design.

SUMMARY

Various research designs exist to assist investigators in analyzing phenomena of interest. Many common research designs were highlighted, within the context of conducting research on child cognitive behavior therapy and behavior modification, including single-subject and group comparison designs. Other research methodologies exist, including meta-analytic approaches and qualitative analyses. Experimenters investigating childhood disorders need to select the design that is most appropriate to the experimental question and goals of the research being conducted, taking into account various pragmatic constraints.

In addition to being familiar with the various designs available, it is important for researchers to be knowledgeable about other aspects of conducting scientific research. These include, but are not necessarily limited to, ethical considerations, legal requirements for conducting research with human participants, and threats to internal and external validity and methods of reducing these threats. By attending to these and related issues, investigators can further the understanding of cognitive-behavioral and behavior modification approaches to the assessment and treatment of childhood psychopathology.

—Kurt A. Freeman and Eric J. Mash

Further Readings

Entry Citation:

"Research Designs." Encyclopedia of Behavior Modification and Cognitive Behavior Therapy. 2005. SAGE Publications. 22 Sep. 2009. <http:///cbt/Article_n2104.html>.
