Console Output

Skipping 23,097 KB..
[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (116 milliseconds)
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (291 milliseconds)
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (466 milliseconds)
[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (440 milliseconds)
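The paired "(using IN expression)" / "(using INSET expression)" tests above and in the later suites cover the two shapes a Spark IN-list can take: Catalyst's OptimizeIn rule rewrites In(attr, values) into InSet(attr, set) once the literal list grows past spark.sql.optimizer.inSetConversionThreshold (10 by default), so the partition-filter converter has to handle both. A minimal sketch of observing the rewrite (session setup and names are illustrative, not from the suite):

```scala
import org.apache.spark.sql.SparkSession

// Illustrative local session; the suite itself builds Hive clients directly.
val spark = SparkSession.builder().master("local[1]").appName("in-vs-inset").getOrCreate()

// Lower the threshold so a three-element list already converts to InSet.
spark.conf.set("spark.sql.optimizer.inSetConversionThreshold", "2")

// The optimized plan now contains InSet rather than In for the same predicate.
spark.range(5).toDF("ds").filter("ds IN (1, 2, 3)").explain(true)
```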
11:27:33.743 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
11:27:42.295 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:27:42.295 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:27:42.319 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.1: create client with sharesHadoopClasses = false (11 seconds, 971 milliseconds)
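The "create client with sharesHadoopClasses = false" tests exercise the isolated classloader through which Spark talks to each metastore version (2.1 through 3.1 in this run). The user-facing settings that drive the same machinery are spark.sql.hive.metastore.version and spark.sql.hive.metastore.jars; a sketch with illustrative values:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values: pick the version of the metastore you actually run.
val spark = SparkSession.builder()
  .enableHiveSupport()
  .config("spark.sql.hive.metastore.version", "2.3.7") // Hive client to load
  .config("spark.sql.hive.metastore.jars", "maven")    // resolve its jars from Maven
  .getOrCreate()
```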
[info] HivePartitionFilteringSuite(2.2):
11:27:53.937 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:27:53.937 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:27:53.958 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
11:27:55.316 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:28:20.237 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:28:20.237 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:28:22.038 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:28:38.986 WARN org.apache.spark.sql.hive.client.Shim_v2_2: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:811)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:157)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:59)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:59)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3030)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2582)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:176)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2963)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2947)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2772)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2965)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2704)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy356.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4821)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy357.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy358.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2577)
	... 68 more
[info] - 2.2: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (27 seconds, 468 milliseconds)
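The stack trace above is expected for this test: with hive.metastore.try.direct.sql=false the metastore takes the JDO/ORM path, which (per the MetaException) can push filters only on string partition keys, so the integer ds/h predicates fail and Spark's shim falls back to listing every partition. Outside a test, the fix the warning suggests looks like the following for an embedded metastore; with a standalone metastore server the property belongs in the server's hive-site.xml instead (a sketch, not suite code):

```scala
import org.apache.spark.sql.SparkSession

// spark.hadoop.* entries are forwarded into the Hadoop configuration that an
// embedded metastore reads, enabling the direct-SQL path the warning names.
val spark = SparkSession.builder()
  .enableHiveSupport()
  .config("spark.hadoop.hive.metastore.try.direct.sql", "true")
  .getOrCreate()
```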
[info] - 2.2: getPartitionsByFilter: ds<=>20170101 (285 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 (268 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (418 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk='aa' (250 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (121 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (103 milliseconds)
[info] - 2.2: getPartitionsByFilter: 20170101=ds (153 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 and h=2 (319 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (94 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 or ds=20170102 (259 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (196 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (118 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (132 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (81 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (238 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (69 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (339 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (344 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (386 milliseconds)
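The filters in this block map onto ordinary pruning queries against a table partitioned the way the test names imply (ds, h, chunk). A sketch with made-up table and session names, showing which predicate shapes prune and which do not:

```scala
// Assumes a Hive-enabled `spark` session as in the earlier sketches.
spark.sql("""
  CREATE TABLE part_demo (value INT)
  PARTITIONED BY (ds INT, h INT, chunk STRING)
""")

// Pushable: comparisons, IN-lists, and null-safe equality (<=>) on raw
// partition columns, including the nested and/or combinations tested above.
spark.sql(
  "SELECT * FROM part_demo WHERE (ds = 20170101 AND h >= 2) OR (ds = 20170102 AND h < 2)"
).explain(true)

// Not pushable (the "not a valid partition predicate" cases): a cast wraps the
// partition column, so the predicate is only evaluated after all partitions
// have been fetched.
spark.sql("SELECT * FROM part_demo WHERE CAST(chunk AS INT) = 1").explain(true)
```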
11:28:46.617 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
11:28:55.179 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
11:28:55.179 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:28:55.200 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.2: create client with sharesHadoopClasses = false (12 seconds, 859 milliseconds)
[info] HivePartitionFilteringSuite(2.3):
11:29:00.567 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:29:00.568 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:29:14.790 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
11:29:14.790 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.25
11:29:14.835 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
11:29:16.034 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:29:16.254 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:29:16.254 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:29:16.255 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:29:38.288 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:29:38.289 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:29:42.975 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
11:29:42.976 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.25
11:29:44.259 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:29:44.424 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:29:44.424 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:29:44.425 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:30:08.746 WARN org.apache.spark.sql.hive.client.Shim_v2_3: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:811)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:157)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:59)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:59)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3315)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2768)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:182)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3248)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3232)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2974)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3250)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2906)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy385.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5093)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy386.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy387.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2679)
	... 68 more
[info] - 2.3: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (35 seconds, 845 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds<=>20170101 (481 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 (357 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (260 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk='aa' (272 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (124 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (141 milliseconds)
[info] - 2.3: getPartitionsByFilter: 20170101=ds (122 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 and h=2 (236 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (87 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 or ds=20170102 (259 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (182 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (108 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (93 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (115 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (179 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (81 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (375 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (388 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (413 milliseconds)
11:30:15.518 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - 2.3: create client with sharesHadoopClasses = false (2 seconds, 107 milliseconds)
[info] HivePartitionFilteringSuite(3.0):
Hive Session ID = 3fbb742e-c151-4c7c-b2cc-b545c1ea6ffe
11:30:19.164 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:30:20.126 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:30:25.686 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:30:27.090 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.091 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.092 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.092 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.093 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.093 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.840 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.841 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.841 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.842 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.842 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:27.842 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:32.550 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
11:30:32.550 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.25
11:30:32.850 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
11:30:33.724 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:30:33.729 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:30:33.864 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = fb05c998-7eac-4667-8d20-a25e32138b5c
11:30:38.299 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:30:39.101 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:30:40.662 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:30:42.140 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.142 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.143 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.143 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.144 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.144 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.966 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.967 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.967 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.970 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.971 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:42.971 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:30:45.452 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
11:30:45.452 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.25
11:30:51.512 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:30:51.517 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:30:51.716 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:30:54.106 WARN org.apache.spark.sql.hive.client.Shim_v3_0: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:811)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:157)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:59)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:59)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3841)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3287)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$600(ObjectStore.java:244)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3771)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3755)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3498)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3773)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3428)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy411.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5766)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy413.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1433)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1427)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy414.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3061)
	... 68 more
[info] - 3.0: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (19 seconds, 56 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds<=>20170101 (394 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 (402 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (237 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk='aa' (213 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (92 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (85 milliseconds)
[info] - 3.0: getPartitionsByFilter: 20170101=ds (142 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 and h=2 (254 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (80 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 or ds=20170102 (291 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (151 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (68 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (79 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (75 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (231 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (120 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (200 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (232 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (333 milliseconds)
11:30:59.615 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = a8e5a1a3-af29-4ae7-b73c-a3ec1f5c6f44
[info] - 3.0: create client with sharesHadoopClasses = false (2 seconds, 904 milliseconds)
[info] HivePartitionFilteringSuite(3.1):
Hive Session ID = 941bf995-0edf-490f-9945-74608daa4f3e
11:31:03.541 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:31:04.333 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:31:06.522 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:31:07.918 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:07.920 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:07.921 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:07.921 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:07.922 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:07.922 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.839 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.847 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.850 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.850 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.850 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:08.851 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:13.473 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
11:31:13.473 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:31:13.754 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
11:31:14.699 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:31:14.704 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:31:14.843 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = a839d875-5f66-401f-804f-351eb3dc2a22
11:31:18.446 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:31:19.570 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:31:21.648 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
11:31:22.904 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:22.906 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:22.906 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:22.907 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:22.908 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:22.908 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.815 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.815 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.816 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.816 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.816 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:23.816 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
11:31:25.342 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
11:31:25.343 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.25
11:31:28.164 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
11:31:28.171 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:31:28.303 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
11:31:32.785 WARN org.apache.spark.sql.hive.client.Shim_v3_1: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:811)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:157)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:59)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:59)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:59)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3929)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3375)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$800(ObjectStore.java:247)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3859)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3843)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3586)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3861)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3516)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy437.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5883)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy439.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1444)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1438)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy440.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3139)
	... 68 more
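
Note: the `Caused by` above is the metastore's ORM (JDO) filter builder rejecting predicate pushdown because the ds/h partition keys are integral rather than string. As the first 3.1 test below confirms, the client then degrades to fetching every partition's metadata. A minimal, self-contained sketch of that catch-and-fall-back pattern, using hypothetical stand-ins (`Partition`, `MetaException`, `getPartitionsByFilterReflectively`, `getAllPartitions`) for the real shim internals:

    import java.lang.reflect.InvocationTargetException

    // Hypothetical stubs so the sketch compiles on its own; the real logic
    // lives in Spark's HiveShim and Hive's ObjectStore.
    case class Partition(values: Seq[String])
    class MetaException(msg: String) extends Exception(msg)

    def getPartitionsByFilterReflectively(filter: String): Seq[Partition] =
      throw new InvocationTargetException(
        new MetaException("Filtering is supported only on partition keys of type string"))

    def getAllPartitions(): Seq[Partition] = Seq(Partition(Seq("20170101", "0")))

    // The pattern the trace shows: try the pushdown and, on a wrapped
    // MetaException, fall back to all partition metadata (slower, but correct).
    def partitionsByFilter(filter: String): Seq[Partition] =
      try getPartitionsByFilterReflectively(filter)
      catch {
        case e: InvocationTargetException if e.getCause.isInstanceOf[MetaException] =>
          getAllPartitions()
      }
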
[info] - 3.1: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (17 seconds, 337 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds<=>20170101 (335 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 (362 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (228 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk='aa' (217 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (87 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (76 milliseconds)
[info] - 3.1: getPartitionsByFilter: 20170101=ds (130 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 and h=2 (283 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (110 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 or ds=20170102 (235 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (151 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (83 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (975 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (81 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (138 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (52 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (206 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (293 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (1 second, 64 milliseconds)
11:31:39.646 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = 7131f1cf-bcc1-4e8d-bec3-1bb500655c3a
[info] - 3.1: create client with sharesHadoopClasses = false (1 second, 556 milliseconds)
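
Note: the suite above toggles hive.metastore.try.direct.sql. With direct SQL disabled, the metastore evaluates partition filters through the JDO path, which (per the MetaException above) only handles string keys, so getPartitionsByFilter returns all partitions and Spark filters client-side. A hedged sketch of reproducing that configuration in an application; the table name is hypothetical:

    import org.apache.spark.sql.SparkSession

    // The spark.hadoop. prefix forwards the setting into the Hadoop/Hive conf.
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .config("spark.hadoop.hive.metastore.try.direct.sql", "false")
      .getOrCreate()

    // With integral partition keys (ds, h) this predicate cannot be pushed
    // down via JDO; expect the "falling back" warning and a full listing.
    spark.sql("SELECT * FROM part_table WHERE ds = 20170101 AND h >= 2").show()
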
[info] ErrorPositionSuite:
[info] - ambiguous attribute reference 1 (38 milliseconds)
[info] - ambiguous attribute reference 2 (2 milliseconds)
[info] - ambiguous attribute reference 3 (6 milliseconds)
11:31:41.661 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/src specified for non-external table:src
[info] - unresolved attribute 1 (992 milliseconds)
[info] - unresolved attribute 2 (8 milliseconds)
[info] - unresolved attribute 3 (13 milliseconds)
[info] - unresolved attribute 4 (4 milliseconds)
[info] - unresolved attribute 5 (3 milliseconds)
[info] - unresolved attribute 6 (7 milliseconds)
[info] - unresolved attribute 7 (17 milliseconds)
[info] - multi-char unresolved attribute (11 milliseconds)
[info] - unresolved attribute group by (12 milliseconds)
[info] - unresolved attribute order by (10 milliseconds)
[info] - unresolved attribute where (6 milliseconds)
[info] - unresolved attribute backticks (5 milliseconds)
[info] - parse error (44 milliseconds)
[info] - bad relation (10 milliseconds)
[info] - other expressions !!! IGNORED !!!
11:31:44.989 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/src does not exist; Force to delete it.
11:31:44.997 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/src
11:31:45.199 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:31:45.199 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:31:45.199 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:31:45.390 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:31:45.390 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:31:45.390 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveShowCreateTableSuite:
11:31:45.477 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:46.143 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with user specified schema (1 second, 43 milliseconds)
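
Note: the TestHiveExternalCatalog warning repeated through this suite is expected, not a failure: a table created with a data source provider (`USING json`) has no corresponding Hive SerDe, so the catalog persists it in Spark's own metastore format and flags that Hive cannot read it. A hedged contrast, assuming a Hive-enabled `spark` session (the second table name is hypothetical):

    spark.sql("CREATE TABLE ddl_test (a INT, b STRING) USING json")       // warns: Spark-only format
    spark.sql("CREATE TABLE hive_ok (a INT, b STRING) STORED AS PARQUET") // maps to a Hive SerDe, no warning
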
11:31:47.145 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:47.452 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table CTAS (1 second, 295 milliseconds)
11:31:48.357 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:49.313 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned data source table (1 second, 940 milliseconds)
11:31:50.263 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:50.508 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - bucketed data source table (1 second, 103 milliseconds)
11:31:51.369 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:51.986 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned bucketed data source table (1 second, 421 milliseconds)
11:31:52.515 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:52.785 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with a comment (848 milliseconds)
11:31:53.381 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:53.840 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with table properties (1 second, 49 milliseconds)
[info] - data source table using Dataset API (1 second, 745 milliseconds)
[info] - temp view (41 milliseconds)
11:31:55.942 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
11:31:56.409 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-24911: keep quotes for nested fields (720 milliseconds)
[info] - view (857 milliseconds)
[info] - view with output columns (677 milliseconds)
[info] - view with table comment and properties (724 milliseconds)
11:31:58.914 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:31:59.279 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - simple hive table (607 milliseconds)
[info] - simple external hive table (1 second, 48 milliseconds)
11:32:00.576 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:00.835 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - partitioned hive table (708 milliseconds)
11:32:01.286 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:01.645 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with explicit storage info (639 milliseconds)
11:32:02.021 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:02.569 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause (878 milliseconds)
11:32:02.798 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:03.096 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with serde info (658 milliseconds)
11:32:03.460 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:04.264 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive bucketing is supported (1 second, 193 milliseconds)
11:32:04.652 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive partitioned view is not supported (592 milliseconds)
11:32:05.262 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
11:32:05.603 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - SPARK-24911: keep quotes for nested fields in hive (611 milliseconds)
11:32:05.856 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - simple hive table in Spark DDL (782 milliseconds)
[info] - show create table as serde can't work on data source table (526 milliseconds)
[info] - simple external hive table in Spark DDL (800 milliseconds)
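
Note: the "in Spark DDL" tests here exercise the two generation modes of SHOW CREATE TABLE: the plain form emits Spark DDL even for Hive tables, while the AS SERDE form emits Hive DDL and, per the "can't work on data source table" test above, is rejected for data source tables. A hedged sketch, reusing the `t1` name from the log:

    spark.sql("SHOW CREATE TABLE t1").show(truncate = false)          // Spark DDL
    spark.sql("SHOW CREATE TABLE t1 AS SERDE").show(truncate = false) // Hive DDL; fails on data source tables
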
11:32:07.968 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause in Spark DDL (1 second, 196 milliseconds)
11:32:09.164 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with nested fields with STORED AS clause in Spark DDL (814 milliseconds)
11:32:09.980 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with unsupported fileformat in Spark DDL (334 milliseconds)
11:32:10.313 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - hive table with serde info in Spark DDL (860 milliseconds)
11:32:11.219 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - partitioned, bucketed hive table in Spark DDL (1 second, 616 milliseconds)
11:32:12.796 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-fe488dab-1884-4f75-aa35-4461632c9cd6/t1 specified for non-external table:t1
[info] - show create table for transactional hive table (383 milliseconds)
11:32:13.342 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:32:13.342 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:32:13.343 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
11:32:13.482 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
11:32:13.482 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
11:32:13.482 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] BigDataBenchmarkSuite:
[info] - No data files found for BigDataBenchmark tests. !!! IGNORED !!!
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 3.533s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
11:32:18.911 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 2.174s
[info] ScalaTest
[info] Run completed in 2 hours, 55 minutes, 57 seconds.
[info] Total number of tests run: 2766
[info] Suites: completed 132, aborted 0
[info] Tests: succeeded 2766, failed 0, canceled 0, ignored 597, pending 0
[info] All tests passed.
[info] Passed: Total 2769, Failed 0, Errors 0, Passed 2769, Ignored 597
[error] (sql-kafka-0-10/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10621 s, completed Aug 3, 2020 11:33:24 AM
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder@2/build/sbt -Phadoop-2.7 -Phive-2.3 -Pspark-ganglia-lgpl -Pmesos -Pyarn -Pkinesis-asl -Phive -Phive-thriftserver -Pkubernetes -Phadoop-cloud -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.ExtendedYarnTest test ; received return code 1
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE