Oracle America, Inc. v. Google Inc., No. 3:2010cv03561 - Document 1782 (N.D. Cal. 2016)

Court Description: MEMORANDUM OPINION RE ORACLE'S MOTION IN LIMINE NO. 5 TO EXCLUDE TESTIMONY OF GOOGLE'S SURVEY EXPERT DR. ITAMAR SIMONSON. Signed by Judge William Alsup on 5/2/2016. (whasec, COURT STAFF) (Filed on 5/2/2016)

IN THE UNITED STATES DISTRICT COURT
FOR THE NORTHERN DISTRICT OF CALIFORNIA

ORACLE AMERICA, INC., Plaintiff,

v.

GOOGLE INC., Defendant.

No. C 10-03561 WHA

MEMORANDUM OPINION RE ORACLE’S MOTION IN LIMINE NO. 5 TO EXCLUDE TESTIMONY OF GOOGLE’S SURVEY EXPERT DR. ITAMAR SIMONSON

INTRODUCTION

In this copyright infringement action involving Java and Android, plaintiff moves to exclude the survey and opinion of defense expert Dr. Itamar Simonson. The final pretrial order held that Google could offer Simonson’s testimony subject to the following limitations. Simonson must make clear that his survey was directed at the factors that developers consider in general when determining which platform to develop for, and he may not offer any conclusion about whether that general proposition is specifically applicable to 2007–08. Simonson may not opine about the meaning that survey respondents attributed to the ambiguous and overlapping terms “popularity,” “established user base,” or “market demand.” Simonson must adjust his testimony to reflect only the conclusions in his survey without the inclusion of pre-testing results.

This memorandum opinion explains the reasoning for that ruling.

STATEMENT

Dr. Itamar Simonson conducted a survey “to assess the key drivers of application developers’ decisions whether to develop applications for a mobile platform” (Simonson Rpt. ¶ 10). He identified four conclusions based on the survey. First, expected demand and profitability are “by far” the most important factors considered by developers. Second, prior familiarity with a programming language is, “at most, a minor consideration for the overwhelming majority of application developers.” Third, “[t]he great majority of application developers” are confident they can learn new programming languages to meet user demand for applications. Fourth, the fact that iOS application developers were willing to learn new languages provides “further evidence” that economic considerations are more important than prior familiarity with a programming language (id. ¶ 12). Google proffers Simonson’s survey to rebut Oracle’s claim for disgorgement of Google’s profits from Android by suggesting that familiarity with Java did not in fact motivate developers to develop for Android (thus minimizing the importance of the declaring code and SSO of the 37 API packages at issue).

Simonson began with a list of over 5,500 developers, from which he randomly selected 152 to survey. The respondents were interviewed by phone using the Computer-Assisted Telephone Interviewing technique. To actually participate in the survey, respondents had to meet four initial screening criteria. First, they had to develop applications for smartphones or tablets. Second, they had to “make or influence” decisions on “whether to develop new applications.” Third, they had to develop applications for at least one of four major mobile platforms. Fourth, neither they nor members of their household could work for a market research firm, advertising agency, or public relations firm (id. ¶¶ 18, 20, 24).

Simonson pre-tested his questionnaire with twenty-three respondents (id. ¶ 22). Based on pretest results, he made two changes.
First, he added a question: “In general, do you make decisions about which applications to develop independently, or as part of a team of application developers?” (id. ¶ 22, Exh. E). Second, he rephrased part of a question from “Please rate your capability to develop and establish in the market a completely new programming language” to “Please rate your capability to develop and establish a completely new programming language in the market” because the pretest suggested some respondents misinterpreted the question (ibid.). We remain uninformed on what this misinterpretation was or how it may have affected pretest responses. Simonson included the pretest results in his final results.

The survey was administered by experienced interviewers from Target Research Group. The interviewers, research firm, respondents, and staff who coded respondents’ open-ended answers were “blind” as to the study’s purpose and the identity of its sponsor. Field Solutions, an independent research firm, conducted a validation survey, reached 149 of the 152 respondents, and discovered no discrepancies in the results (id. ¶¶ 14, 23).

ANALYSIS

An expert witness may provide opinion testimony “if (1) the testimony is based upon sufficient facts or data, (2) the testimony is the product of reliable principles and methods, and (3) the witness has applied the principles and methods reliably to the facts of the case.” Fed. R. Evid. 702. District courts are charged with a “gatekeeping role” to ensure that expert testimony admitted into evidence is both reliable and relevant. Sundance, Inc. v. DeMonte Fabricating Ltd., 550 F.3d 1356, 1360 (Fed. Cir. 2008); see Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 589 (1993).

Oracle raises several objections to Simonson’s survey. This memorandum addresses each in turn.

1. GOOGLE’S INTERNAL DOCUMENTS.

Oracle points out that evidence from Google’s own internal documents indicates Google copied parts of Java APIs specifically to tap into the Java developer community, suggesting that Google believed prior familiarity with the programming language used was more attractive than the promise of profits to developers. Oracle claims this “completely” contradicts Simonson’s conclusions, and that Simonson’s survey is thus both irrelevant and unreliable. Not so.

The evidence Oracle cites might indicate that Google believed a familiar programming language would play a significant role in attracting developers. However, the question addressed by Simonson’s survey was not whether Google believed prior familiarity with a programming language was an important consideration for developers, but whether developers thought of it as such. Thus, contrary to Oracle’s assertion, evidence of Google’s strategic predictions does not “completely” contradict Simonson’s conclusions (even though it contradicts those conclusions in part). Oracle suggests discrepancies between the two must mean Simonson’s conclusions are unreliable, but they could also simply indicate that Google’s predictions of what motivated developers were wrong. Moreover, insofar as evidence of Google’s strategic considerations tends to contradict Simonson’s conclusions, such evidence speaks to the weight of his opinion, not its admissibility. See Daubert, 509 U.S. at 596 (“presentation of contrary evidence” is a “traditional and appropriate means of attacking shaky but admissible evidence”).
The Court suspects that Simonson will have a hard time on cross explaining away Google’s own contrary comments, but his survey cannot be excluded simply on that ground.

Oracle suggests that Simonson’s potential to mislead the jury outweighs his probative value. Specifically, Oracle claims “Google would use that survey to trick jurors into rejecting Oracle’s powerful evidence from Google of why Google copied” (Pl.’s Reply MIL No. 5 at 1). However, as noted above, Oracle’s evidence from Google tends to show only what Google perceived and believed about developers’ motivations. It is only one way of getting at the greater issue of whether and to what extent Google’s copying of the declaring code and SSO (structure, sequence, and organization) of the 37 APIs at issue drove Android’s success. Simonson’s survey of developers is another way. It is not quite “trickery” for Google to present competing evidence against Oracle on a factual dispute at issue in this case. Arguments on how “powerful” or persuasive this competing evidence is must be directed to the jury.

Oracle also suggests Simonson’s survey is irrelevant because it deals with only “one of several ways Oracle shows a causal nexus between Google’s infringement and the Android-related profits,” but this argument goes to the survey’s weight, not its relevance or admissibility (see id. at 1–2). The fact that other evidence might also be relevant does not in and of itself undermine the survey’s relevance. Oracle cites Dr. James Kearl, the court-appointed damages expert, for the proposition that Simonson’s survey is irrelevant to the issue of damages (specifically, disgorgement of profits) because whether Google’s copying in fact attracted developers is “a different question” from whether “Google thought it needed [Java] at launch” (id. at 3). However, Kearl also said that the jury would need to weigh the effect of any conclusion that consumer demand for Android attracted developers (rather than the converse). While Simonson’s survey and Oracle’s evidence from Google do present different questions, both are ultimately relevant to disputed facts at issue in this case.

2. SURVEY QUESTIONS.

Simonson’s survey was administered in December of 2015 and January of 2016 (Simonson Rpt. ¶¶ 22–23). However, Android’s launch period was in 2007–09. Oracle claims that in 2007–09, the applications market was in its infancy and no one knew if developing applications would be profitable; now, however, the market is well-established, so developers are more likely to invest in new platforms. Thus, Oracle argues, Simonson’s 2015–16 survey fails to represent marketplace conditions in 2007–09.

Google admits Simonson operated on the premise that specific market conditions would not affect developers’ decisions, so there was no need to recreate specific market conditions in his survey. Oracle contends, however, that specific market conditions do in fact affect developers’ decisions. Oracle cites the report of Dr. Olivier Toubia, an expert retained by Oracle to analyze and respond to Simonson’s survey, as support for its contention.
Specifically, Oracle cites paragraphs 26–36 in Toubia’s report for the proposition that the applications market today differs drastically from the market in 2007–09, such that developers today are more likely to invest in new platforms than they were in 2007–09. Toubia’s report, however, does not support Oracle’s claim. Some cited paragraphs broadly critique Simonson’s failure to recreate or account for the specific 2007–09 historical context for his survey (Toubia Rpt. ¶¶ 26, 28, 36). Others generally assert that the applications market has undergone substantial changes since Android’s launch (id. ¶¶ 29–30, 34). Still others suggest the phrasing of Simonson’s questions could have been confusing or misleading to some respondents (id. ¶¶ 28, 35). Approximately half of the portion of Toubia’s report cited by Oracle essentially parrots Oracle’s argument that Google’s copying of the declaring code and SSO of the 37 API packages was an important driver of Android’s success (id. ¶¶ 27, 29, 31–34).

In short, nowhere does Toubia actually show, as Oracle claims, that developers today are more likely to invest in new platforms than they were in 2007–09. Oracle has thus presented no evidence for the proposition that developers’ motivations are different today than they were in 2007–09. In other words, Oracle has not successfully challenged Simonson’s premise that a survey of developers in 2015–16 is relevant to, and probative of, the question of what motivated developers in 2007–09.

Oracle cites Kwan Software Eng’g, Inc. v. Foray Techs., LLC, No. C 12-03762 SI, 2014 WL 572290, at *4–5 (N.D. Cal. Feb. 11, 2014) (Judge Susan Illston), for the proposition that failure to approximate actual marketplace conditions can provide grounds for inadmissibility. As discussed below, however, Kwan is distinguishable. Simonson’s survey is probative of developers’ motivations for developing for a new platform in general, though the weight accorded his conclusions may be diminished by contrary evidence that developers’ motivations have changed with the market.

Oracle also points out that Android was not yet popular with users in 2007–09, although Simonson’s survey found a platform’s popularity was the most important factor for developers. Thus, Oracle contends, Simonson’s survey and conclusions should be excluded for failing to address the question he purports to answer. Not so.*

Simonson’s methodology was to ask respondents to list and rank the importance of various decision-making factors (Simonson Rpt. ¶¶ 40, 48, 57). Oracle fails to undermine the adequacy of Simonson’s methodology for addressing the question of what motivates developers to develop for a particular platform. Oracle’s argument is essentially that because Simonson’s methodology produced one particular finding that is unhelpful to the ultimate purpose of the survey, the entire survey should be excluded. This argument is meritless.
If Simonson has evidence that popularity is a main consideration for developers, and Oracle has evidence that Android was not popular in 2007–09, both can be presented to the jury to consider as they see fit in determining what motivated developers to develop for Android in 2007–09.

Moreover, Simonson’s survey as a whole would still be relevant because it identified and weighed the relative importance of multiple factors affecting developer decisions. In claiming Android initially had no user base, Oracle states, “developers had to be motivated by something else” (Pl.’s Reply MIL No. 5 at 1). Insofar as Simonson’s survey is probative of what that “something else” might be and finds that “something else” was likely not prior familiarity with the programming language, it is relevant to factual disputes at issue in this case. Oracle neglects to even mention this other key finding of Simonson’s survey: that prior familiarity with a programming language is not an important consideration to developers in general (Simonson Rpt. ¶¶ 40, 48, 57). This finding is relevant to, and probative of, the issue of whether and to what extent Google’s copying drove Android’s success. That the same methodology which produced this finding also produced other, perhaps less probative findings does not warrant exclusion of Simonson’s entire survey and conclusions.

Simonson’s survey, however, did not attempt to parse out the various components of a platform’s “popularity,” nor did it attempt to examine any of the other factors identified as significant by developer respondents. For example, the survey did not define, much less explain, what constitutes an “established user base,” or consider what factors might contribute to market demand for a particular platform. The survey thus provides insufficient basis for any expert opinion as to why Android, or any platform, was or was not popular at any given point in time. Disputed facts at issue in this case, however, include whether, when, and to what extent Google’s copying of the declaring code and SSO of 37 APIs contributed to Android’s overall success, including its popularity, user base, and market presence. Due to this potential overlap in common terminology, Simonson’s survey and opinions could confuse or mislead the jury. Therefore, Simonson is expressly prohibited from attempting to define or analyze specific factors like “popularity,” “established user base,” or “market demand” in his survey results, insofar as those terms were not specifically defined or analyzed in the survey questionnaire. See Fed. R. Evid. 403. Simonson is also specifically prohibited from opining as to whether or how specific factors contribute to a platform’s overall success. See id. This prohibition does not, however, limit Simonson’s ability to testify as to his survey results to the extent they indicate what factors developers in general consider in deciding whether to develop for a particular platform.

* Oracle bases this argument on paragraphs 40, 47, and 58 in Simonson’s survey. Paragraph 47 does not assert the conclusion Oracle challenges; it states only that 116 of 152 respondents started developing Android applications at some point between 2007 and 2015 (Simonson Rpt. ¶ 47). Oracle likely meant to refer to paragraph 48, which explains that 66 of the 116 respondents who developed Android applications identified “User base/Market share/Demand/Popularity/ROI [return on investment]” as their first consideration (id. ¶ 48).

3. SURVEY RESPONDENTS.

Oracle raises two objections to Simonson’s survey sample. First, Oracle points out that most developers surveyed were not developing applications for Android in 2007–09.
Second, Oracle contends Simonson’s screening for respondents was too broad because he included not only developers who actually decided which platforms to develop for, but also those who only influenced such decisions. Oracle essentially claims the only “proper universe” of people for this survey would have been developers who actually made the decision to develop applications for Android in 2007–09.

The standard for a “proper universe” of respondents, such that a survey would be sufficiently reliable to be admissible, is not as demanding as Oracle claims. Oracle cites three decisions to support its position: Kwan, 2014 WL 572290; ThermoLife Int’l, LLC v. Gaspari Nutrition, Inc., No. CV–11–01056–PHX–NVW, 2014 WL 99017 (D. Ariz. Jan. 10, 2014) (Judge Neil V. Wake) (vacated and remanded); and Reinsdorf v. Skechers U.S.A., 922 F. Supp. 2d 866 (C.D. Cal. Feb. 6, 2013) (Judge Dean D. Pregerson). Each decision is distinguishable. Moreover, as explained below, the standard Oracle proposes for survey admissibility was recently rejected by the Ninth Circuit when it overruled the ThermoLife decision.

In Kwan, 2014 WL 572290, at *4–5, the court excluded an expert’s survey and opinions that were proffered to support a false advertising claim arising from advertising for photo software. The survey purported to show that the advertisements at issue were likely to mislead or confuse consumers. However, the survey did not focus on potential users of the software; its respondents were not even people who would see the alleged misrepresentations, much less potential purchasers of the software. The proffering party “made no attempt to show” the survey’s probative value despite its unrepresentative sample. Id. at *5. The survey was thus inadmissible because the proffering party had not shown that it was relevant or reliable.

In contrast, Simonson ensured that at least half of his respondents developed applications specifically for Android (Simonson Rpt., Exh. E). He also compared the responses of Android developers to those of developers for other platforms, and found them to be consistent with each other (Simonson Rpt. ¶ 49). This analysis showed no significant distinctions between the motivations of Android developers and developers in general, such that the survey would be unacceptably unrepresentative. Moreover, unlike the expert in Kwan, Simonson does not purport to draw specific conclusions (i.e., about the motivations of Android developers in 2007–09), but offers more general conclusions about developers’ motivations in general (id. ¶ 12). His conclusions are thus adequately supported by his methodology.

In ThermoLife, 2014 WL 99017, at *2, the court excluded an expert’s survey and opinions that purported to determine whether certain statements about a product affected consumers’ buying decisions. The survey did not state when it was conducted or how participants were solicited. It made no attempt to show that survey respondents were representative of potential consumers of the products at issue. Specifically, survey respondents included consumers who could not have used the specific product at issue for at least two years at the time of the survey. Survey questions were worded to obtain a biased response favorable to the proffering party.
And the conclusions the expert drew from the survey exceeded the scope of the survey’s findings in favor of the proffering party.

Unlike the survey in ThermoLife, Simonson’s survey explained how it was conducted and how participants were solicited (Simonson Rpt. ¶¶ 18, 24). As described above, the survey attempted to show that its respondents were representative of the studied population. The survey questions were not worded to obtain biased responses favorable to the proffering party (id., Exh. E). And, as explained above, Simonson’s conclusions do not exceed the scope of his survey.

Notably, the Ninth Circuit recently vacated and remanded the ThermoLife decision, finding among other things that the district court improperly excluded the proffered survey and accompanying expert opinion evidence. ThermoLife Int’l v. Gaspari Nutrition, No. 14-15180, 2016 U.S. App. LEXIS 6807, at *4–7 (9th Cir. Apr. 14, 2016). Specifically, the Ninth Circuit concluded that “[a]lthough the district court faulted the survey’s biased questions and unrepresentative sample, neither defect was so serious as to preclude the survey’s admissibility.” Id. at *6. Objections based on such defects went only to the weight, not the admissibility, of the survey. Moreover, the court explicitly observed that the survey included respondents from both what the district court deemed the relevant consumer class, and a more general consumer population that was merely probative of the specific class at issue. The court found this mixed sample “did not severely limit the probative value of the survey’s results.” Ibid. (internal citations omitted).

In Reinsdorf, 922 F. Supp. 2d at 873, the court excluded an expert’s survey and opinions that purported to test brand recognition but were proffered as evidence that “one can fairly easily parse how much of the audience appeal of the work originates from the various elements.” The survey provided no basis to indicate how its sample was selected, or why its respondents were representative of the relevant population. The survey format used images that produced biased, unreliable results, and provided respondents with no basis for meaningful brand comparison. The proffering party made “virtually no attempt to defend [the expert’s] methods,” and could not identify any scientific principles underlying the survey, which appeared to violate numerous accepted practices in the field of survey research. Id. at 878–79.

Simonson’s survey does not share the flaws of the survey in Reinsdorf. As explained above, he does not purport to draw conclusions beyond the scope of his survey. The survey itself explained how its sample was chosen, and why its respondents were representative of the studied population. The survey format was not designed to produce biased results. Google, unlike the proffering party in Reinsdorf, defends Simonson’s methods. Simonson identified the scientific principles underlying his survey (Simonson Rpt. ¶¶ 17, 23–24). And the survey did not appear to violate numerous accepted practices in the field of survey research.

Oracle does not dispute that Simonson’s randomly selected sample of 152 developers is representative of the mobile application developer population (see id. ¶ 10).
Oracle’s objection is essentially that the motivations of these 152 developers are not representative of the motivations of decision-making Android developers in 2007–09. However, none of the decisions cited by Oracle go so far as to suggest that a survey is inadmissible unless its sample was exactly representative of the studied population within the precise timeframe at issue. In fact, in its decision remanding ThermoLife, our court of appeals explicitly rejected such an approach, holding the district court abused its discretion where it excluded a survey because, among other defects, the sample included both directly relevant respondents and respondents who were only generally probative of the relevant population. Oracle’s argument essentially relies on the reasoning of the ThermoLife decision, now rejected by the court of appeals. That error will not be repeated here.

Therefore, as long as Simonson does not purport to draw conclusions specific to Android developers in 2007–09, his survey sample did not need to be limited to respondents from that population in order to produce reliable results. As a precaution, Simonson will be required to clarify that his survey results indicate the motivations of developers in general, not the specific motivations of Android developers within the 2007–09 timeframe. Simonson may attempt to explain why and how his findings and conclusions are nonetheless probative of what motivated Android developers in 2007–09, subject to cross-examination and the presentation of contrary evidence.

Oracle further argues that making an independent decision to develop for a platform is different from influencing a decision to develop for a platform, but it is unclear how this distinction would render Simonson’s survey inadmissible. Simonson’s survey and opinion purport to show what attracts developers to a platform. The motivations of “influencing” developers may be less probative of this issue than the motivations of “decision-making” developers, but they are still probative insofar as they contributed to the overall attractiveness of a platform to developers.

Oracle also provides no basis for the suggestion that Android developers have different motivations than developers in general in choosing which platform to develop for. Moreover, Simonson’s four ultimate conclusions do not purport to be specific to Android developers (Simonson Rpt. ¶ 12). Rather, his conclusions speak to the motivations of developers in general, which is appropriate given his survey sample. He specifically ensured that at least half of his sample consisted of Android developers to show that Android developers’ motivations do not differ significantly from developers’ motivations in general, and to demonstrate the probative value of his survey (see id. ¶ 49, Exh. E). If there is other admissible evidence of discrepancies between the motivations of Android developers and those of other developers, such evidence could be presented to challenge the weight of Simonson’s survey and opinion at trial. Unless the motivations of developers in general shared no significant overlap with those of Android developers, such discrepancies would not invalidate Simonson’s survey so as to render it inadmissible.
However, if Simonson attempts to testify at trial about new conclusions specific to Android developers that are not adequately supported by his survey methodology, Oracle may object at that time.

4. SURVEY TIMEFRAME.

Oracle also argues that respondents in the survey who developed applications in 2007–09 are unlikely to remember the details of their decision-making processes from that time. While not explicit, the point of this argument is presumably that Simonson’s survey is unreliable because its results are based on unreliable memories. Oracle contends, and Toubia’s report echoes, that well-accepted survey methodology discourages surveys that purport to study things that happened long ago (Toubia Rpt. ¶¶ 21–25, 37). These criticisms appear targeted to Questions 5 and 6, which asked respondents what year they started offering mobile applications, and what factors or considerations led to their decision to develop those applications for specific platforms (id., Exh. E).

Google and Simonson defend these questions by claiming decisions to develop for a new platform are “high involvement” or major decisions that people tend to remember well, relative to their memories of “autobiographical” information. Both Oracle and Google cite to two articles for the general proposition that autobiographical memories deteriorate over time. Contrary to Oracle’s claim that Google does not refute the literature cited by Toubia, Google contends that literature on autobiographical memory is inapplicable in this situation because the decision to develop for a new platform is not an “autobiographical” event. Toubia also cites two of Simonson’s own articles for the proposition that recall issues can interfere with research results (Toubia Rpt. ¶ 25). One of those articles specifically noted that the ease with which consumers choose between options affects how they remember the positive and negative components of those options. Nathan Novemsky et al., Preference Fluency in Choice, 44 J. MARKETING RES. 347, 354 (2007).

These sources indicate that responses to Questions 5 and 6 may have been affected by imperfect recall. However, this is not a fatal flaw of the survey methodology such that the entire survey needs to be excluded. Potential issues with recall bias or imperfect recall go to the weight of Simonson’s findings and are appropriate to bring up on cross-examination, or through the introduction of other admissible evidence. See Medlock v. Taco Bell Corp., No. 1:07-cv-01314-SAB, 2015 WL 8479320, at *5 (E.D. Cal. Dec. 9, 2015) (Magistrate Judge Stanley A. Boone); see also Classic Foods Intern. Corp. v. Kettle Foods, Inc., No. SACV 04–725 CJC (Ex), 2006 WL 5187497, at *7 (C.D. Cal. Mar. 2, 2006) (Judge Cormac J. Carney) (noting that “no survey is perfect,” and “flaws in the survey may be elucidated on cross-examination, so that the finder of fact can appropriately adjust the weight it gives to the survey’s results”).

In general, many of Oracle’s objections to Simonson are to the effect that his survey methodology was not optimal, or that its technical components were imperfect. However, Oracle falls short of actually demonstrating unreliability sufficient to warrant exclusion under Daubert.
Most of the alleged deficiencies are of the sort that juries would properly consider in assessing the probative value of a survey. They therefore go to the survey’s weight, not to its admissibility. Southland Sod Farms v. Stover Seed Co., 108 F.3d 1134, 1143 (9th Cir. 1997) (criticisms of a survey’s design, format, or limited scope went to its weight, not admissibility); Prudential Ins. Co. of Am. v. Gibraltar Fin. Corp. of Cal., 694 F.2d 1150, 1156 (9th Cir. 1982) (“Technical unreliability goes to the weight accorded a survey, not its admissibility.”); but see Brighton Collectibles, Inc. v. RK Texas Leather Mfg., 923 F. Supp. 2d 1245, 1257 n.8 (S.D. Cal. Feb. 12, 2013) (Judge Gonzalo P. Curiel) (Prudential’s broad statement must be construed in light of Daubert and the court’s gatekeeping obligation).

5. INTERPRETATION OF SURVEY RESULTS.

Simonson’s survey found that 62% of respondents identified “User base/Market share/Demand/Popularity/ROI” as the first consideration for developers in deciding whether to develop for a particular platform (Simonson Rpt. ¶ 40). Simonson interpreted this result to support his conclusions that “demand (or expected demand) and related economic considerations (such as ROI)” are the primary factors in development decisions, while prior familiarity with the programming language is a “less important, secondary” factor (ibid.). Oracle points out, however, that programming language factors into the ROI because prior familiarity with the language used lowers the “investment” cost to the developer of working with a new platform. Thus, Oracle contends, prior familiarity with the programming language is in fact a “significant factor,” which contradicts Simonson’s opinion.

Again, it is unclear why Oracle’s argument compels the exclusion of Simonson’s survey and opinion. Simonson does not deny that prior familiarity with the programming language is a factor considered by developers, or that ROI is part of “User base/Market share/Demand/Popularity/ROI.” He concluded only that, based on survey results, economic considerations are relatively more important than prior familiarity with a programming language (Simonson Rpt. ¶ 40). This is supported by survey results showing that although 62% of respondents identified some form of “User base/Market share/Demand/Popularity/ROI” as their primary consideration, only one respondent actually listed “ROI” as a primary consideration (id., Exh. F, Table 4, at 4). How much prior familiarity with the programming language contributes to ROI, and in turn to the decision to develop for a particular platform, is a factual determination subject to competing interpretations.

Similarly, Oracle’s reliance on Kearl’s reaction to the survey is misplaced. Kearl said he did not find the survey’s questions “particularly interesting” because “nobody would admit that they would have a hard time learning something new,” ostensibly referring to the survey’s questions on how easily developers could learn a new language (see id., Exh. E). None of his comments actually challenged the survey’s relevance or reliability. These quotes from Kearl provide no basis for exclusion. As cited by Oracle, they are essentially personal or ipse dixit opinions, not expert conclusions or evidence.
Even if they were expert opinions, they would be properly raised by competing experts at trial, not as a basis for exclusion.

The parties may disagree as to the precise implications of the survey results, and of course do disagree as to the greater issue of how much Google’s copying of the declaring code and SSO of the 37 APIs factored into Android’s success. But these disagreements do not suggest Simonson’s opinion is so unfounded as to be inadmissible. To the extent that Oracle challenges Simonson’s conclusions, but not the survey methodology or results they are reasonably based on, such critiques go to the weight of the survey rather than its admissibility. See Clicks Billiards, Inc. v. Sixshooters, Inc., 251 F.3d 1252, 1265 (9th Cir. 2001) (critiques of a survey’s conclusions go to the survey’s weight rather than its admissibility).

6. LACK OF SURVEY CONTROL GROUP.

Oracle also contends that Simonson’s lack of a control group is fatal to the admissibility of his survey. Oracle’s reasoning seems to be: Simonson purports to measure a kind of “causation,” that is, how specific factors affect developers’ decisions; a survey that purports to measure causation must include a proper control; therefore, Simonson’s survey needed a proper control. Oracle cites Shari S. Diamond, Reference Guide on Survey Research, in REFERENCE MANUAL ON SCI. EVIDENCE 359, 397–98 (3d ed. Fed. Jud. Ctr. 2011), as well as two of Simonson’s previous reports, for the proposition that a survey that purports to measure causation must include a control group.

The surveys contemplated by those sources, however, attempted to measure how the introduction of a particular stimulus was causally linked to a particular outcome (e.g., how publication of a particular advertisement may have caused consumer confusion). Diamond, supra, at 397–98; Itamar Simonson Report at ¶ 45, Safe Auto Ins. Co. v. State Auto. Mut. Ins. Co., No. 2:07-cv-1121 (S.D. Ohio Oct. 27, 2008); Itamar Simonson Report at ¶ 44, Larin Corp. v. Alltrade, Inc., No. EDCV 06-1394 ODW (OPx) (C.D. Cal. Feb. 15, 2008). Under such circumstances the produced outcome (e.g., consumer confusion) may have been caused by preexisting conditions (e.g., preexisting consumer beliefs) rather than the tested stimulus, so it makes sense to use a control group that has not been exposed to the stimulus as a baseline against which to measure the stimulus’s effects.

However, a control group is not required for a survey that purports only to understand what developers perceive as relatively more or less important factors in their decision-making process (Simonson Dep. at 98–99). As Google points out, Simonson did not attempt to test the effect of a stimulus, so there was nothing to control for. Oracle characterizes the absence of a control group as a fatal flaw in Simonson’s survey, but does not explain what stimulus required controlling, or why a “control group” was required under these circumstances. Rather, Oracle vaguely asserts that without a control, Simonson “cannot determine if his survey results are accurate, or reflect confounding factors or a flawed survey design.” Oracle does not define or otherwise clarify what it means by “confounding factors,” much less explain how such factors necessitated a control group for the survey to be reliable.
In short, Oracle has not successfully challenged Simonson’s explanation that a control group was not required in this survey to produce sufficiently reliable results.

7. INCLUSION OF PRETEST RESULTS.

After comparing results from both the pretest of 23 respondents and the full-scale survey, Simonson decided to include results from the pretest in his final results (Simonson Dep. at 181). Oracle contends this inclusion violated generally accepted standards for survey research, because Simonson knew how the pretest results would affect his overall results, and thus used the pretest to artificially alter the outcome of his survey. Oracle and Toubia cite Erin Ruel et al. for the proposition that this “violates established survey practice” (Toubia Rpt. ¶ 62). See ERIN RUEL ET AL., SURVEY RESEARCH: THEORY AND APPLICATIONS 117 (2016). Erin Ruel et al. explain that if the survey is modified between the pretest and full test, as it was here, “data collected in the pretest . . . could be inaccurate or biased compared to the results of the full-scale study.” Ibid. They acknowledge that “it may be unreasonable to exclude [pretest] participants from the entire study, especially in small-scale studies,” but add that under those circumstances, “comparison and discussion of the differences between the pretested groups and the full-scale group is necessary. It is also important to exercise caution when interpreting these results, and it is important to note this potential data contamination as a possible limitation of the research.” Ibid.

Google and Simonson’s counterargument that the pretest results agreed with the overall results of the survey is beside the point. The issue is not whether the pretest results accorded with the full-scale survey results, but whether both were achieved using uniform methodology so as to produce reliably similar results. Google and Simonson do not dispute that the survey was modified between the pretest and full-scale survey. Google’s characterization of these modifications as “minor” and “cosmetic” is disingenuous. Simonson himself explained that one question was changed because the pretest suggested it was misinterpreted by some respondents, and another entirely new question was added without explanation (see Simonson Rpt. ¶ 22). These are hardly “cosmetic” changes. For example, pretest respondents who “misinterpreted” the original Question 8 may have responded differently had they been asked the modified Question 8 (Simonson Rpt., Exh. E). Or it may be, as Oracle suggested, that Simonson added a new question because his initial screening questions were overbroad. At minimum, the new question could raise concerns as to differences in scope or sample, and therefore reliability, between the pretest and full-scale surveys.

Nonetheless, after conducting the pretest and modifying the survey questionnaire, Simonson included the pretest results in his overall results without any comparison or discussion of differences between the pretest and full-scale groups, or acknowledgment of how this inclusion might have limited the survey’s reliability or conclusions. Moreover, as Oracle points out, the specific results aside, the inclusion of 23 additional data points in the sample size in and of itself bolsters the credibility of Simonson’s survey and its results in Google’s favor.
It is no defense to say that Simonson’s decision to include pretest results was harmless because those results were “very similar” to the full-scale survey results (see Oracle Exh. 26, Simonson Dep. at 181). The point is that insofar as Simonson improperly authorized himself to decide whether or not to include a particular set of data after he discovered how that data would affect his overall results, his methodology was unreliable.

Any portion of Simonson’s survey or opinions based on pretest results is therefore STRICKEN. Simonson may still refer to the survey’s size, statistical significance, or respondents, but in doing so he must refer only to the full-scale survey, and he must modify any specific numerical findings accordingly.

8. LATE SUBMISSION OF SIMONSON’S REPORT.

Admissibility issues aside, Oracle contends Simonson should not be permitted to testify in Phase I because he submitted his report after the January 8, 2016 deadline for Google’s expert reports on fair use. Excluding expert evidence is an “automatic” sanction for failure to disclose information in a timely fashion unless the proffering party can show the violation is either substantially justified or harmless. Fed. R. Civ. P. 26(a)(2)(D), 37(c)(1); see also Goodman v. Staples The Office Superstore, LLC, 644 F.3d 817, 827 (9th Cir. 2011); R & R Sails, Inc. v. Ins. Co. of Pa., 673 F.3d 1240, 1246 (9th Cir. 2012). Google does not challenge this contention in its opposition to Oracle’s motion. This is ultimately a moot issue, since Google confirmed it did not intend to offer Simonson in its case-in-chief on fair use (Def.’s Opp. to Pl.’s MIL No. 5 at 1 n.1).

CONCLUSION

For the foregoing reasons, the Court GRANTED IN PART and DENIED IN PART Oracle’s fifth motion in limine. As stated in the final pretrial order, Simonson must clarify that his survey results indicate the motivations of developers in general, not the specific motivations of Android developers within the 2007–09 timeframe. He may, however, attempt to explain why and how his findings and conclusions are nonetheless probative of what motivated Android developers in 2007–09, subject to cross-examination and the presentation of contrary evidence.

Simonson may not attempt to define or analyze specific factors like “popularity,” “established user base,” or “market demand” in his survey results, insofar as those factors are not specifically defined or analyzed in the survey questionnaire. He also may not opine as to whether or how specific factors contribute to a platform’s overall success. He may, however, testify as to his survey results to the extent that they indicate what factors developers in general consider in deciding whether to develop for a particular platform.

Any portion of Simonson’s survey or opinions based on pretest results is STRICKEN. Any references to the size of the survey, its statistical significance, or its respondents may be based only on the full-scale survey and its results.

Dated: May 2, 2016.

WILLIAM ALSUP
UNITED STATES DISTRICT JUDGE

