Console Output

Skipping 26,202 KB of earlier output.
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (164 milliseconds)
[info] - 2.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (154 milliseconds)
[info] - 2.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (217 milliseconds)
05:57:46.295 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
05:57:51.821 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
05:57:51.821 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.23
05:57:51.833 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - 2.1: create client with sharesHadoopClasses = false (7 seconds, 494 milliseconds)
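The "create client with sharesHadoopClasses = false" tests exercise Spark's isolated classloading of the Hive client. A conceptual sketch of the idea in Scala (this is not Spark's actual IsolatedClientLoader; the jar path is a placeholder):

    // Child-first loading of Hive classes in a separate classloader, so the
    // metastore client does not share Hadoop/Hive classes with the caller.
    import java.net.{URL, URLClassLoader}

    val hiveJars = Array(new URL("file:///path/to/hive-exec.jar")) // placeholder path
    val isolatedLoader = new URLClassLoader(hiveJars, null)        // null parent: share nothing
    val hiveClass = isolatedLoader.loadClass("org.apache.hadoop.hive.ql.metadata.Hive")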
[info] HivePartitionFilteringSuite(2.2):
05:57:58.713 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
05:57:58.713 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.23
05:57:58.726 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
05:57:59.451 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
05:58:16.431 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
05:58:16.431 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.23
05:58:17.407 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
05:58:32.628 WARN org.apache.spark.sql.hive.client.Shim_v2_2: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3030)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2582)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:176)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2963)
	at org.apache.hadoop.hive.metastore.ObjectStore$6.getJdoResult(ObjectStore.java:2947)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2772)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2965)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2704)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy361.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4821)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy362.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1228)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy363.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2577)
	... 68 more
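The MetaException above is the intended trigger for this test: with hive.metastore.try.direct.sql=false, the metastore falls back to JDO/ORM filter pushdown, which supports only string-typed partition keys, so the Spark shim catches the failure and fetches all partition metadata instead. As the warning suggests, enabling direct SQL avoids the degraded path. A minimal sketch of setting it from Spark (the app name is a placeholder; the spark.hadoop.* passthrough into the metastore client's HiveConf is assumed):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("direct-sql-example") // placeholder
      // Forwarded into the Hadoop/Hive configuration seen by the metastore client.
      .config("spark.hadoop.hive.metastore.try.direct.sql", "true")
      .enableHiveSupport()
      .getOrCreate()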
[info] - 2.2: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (21 seconds, 897 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds<=>20170101 (273 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 (250 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (184 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk='aa' (156 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (77 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (61 milliseconds)
[info] - 2.2: getPartitionsByFilter: 20170101=ds (100 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 and h=2 (185 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (62 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds=20170101 or ds=20170102 (138 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (96 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (55 milliseconds)
[info] - 2.2: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (128 milliseconds)
[info] - 2.2: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (57 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (142 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (59 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (193 milliseconds)
[info] - 2.2: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (154 milliseconds)
[info] - 2.2: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (214 milliseconds)
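Each predicate above is pushed down to the metastore as a partition filter. A hypothetical end-to-end illustration in a Hive-enabled spark-shell (where spark is predefined); the table name part_t is made up, and the suite itself calls HiveClient.getPartitionsByFilter directly rather than going through SQL:

    // Partition pruning turns the WHERE clause into a metastore-side filter
    // equivalent to the "chunk in (...) and ((ds=... and h>=2) or ...)" case.
    spark.sql(
      "CREATE TABLE IF NOT EXISTS part_t (v INT) " +
        "PARTITIONED BY (ds INT, h INT, chunk STRING)")
    spark.sql(
      "SELECT * FROM part_t WHERE chunk IN ('ab','ba') " +
        "AND ((ds=20170101 AND h>=2) OR (ds=20170102 AND h<2))").explain()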
05:58:37.403 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
05:58:42.748 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.1.0
05:58:42.748 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.1.0, comment = Set by MetaStore jenkins@192.168.10.23
05:58:42.760 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
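The ObjectStore warnings directly above are benign for a freshly created embedded metastore: schema verification is off, so the client simply records the schema version it expects. For contrast, a hypothetical way to turn verification on from Spark, using the same spark.hadoop.* passthrough assumed earlier:

    // With verification on, a schema version mismatch fails fast instead of
    // being silently recorded, which is usually what you want outside tests.
    import org.apache.spark.sql.SparkSession

    val verifying = SparkSession.builder()
      .config("spark.hadoop.hive.metastore.schema.verification", "true")
      .enableHiveSupport()
      .getOrCreate()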
[info] - 2.2: create client with sharesHadoopClasses = false (7 seconds, 733 milliseconds)
[info] HivePartitionFilteringSuite(2.3):
05:58:46.978 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
05:58:46.978 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
05:59:06.457 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
05:59:06.457 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.23
05:59:06.476 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
05:59:07.113 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
05:59:07.233 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
05:59:07.233 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
05:59:07.233 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
05:59:25.678 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
05:59:25.678 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
05:59:28.642 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
05:59:28.642 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.23
05:59:29.483 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
05:59:29.592 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
05:59:29.592 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
05:59:29.593 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
05:59:44.502 WARN org.apache.spark.sql.hive.client.Shim_v2_3: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:439)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:356)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:278)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:583)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3315)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:2768)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$500(ObjectStore.java:182)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3248)
	at org.apache.hadoop.hive.metastore.ObjectStore$7.getJdoResult(ObjectStore.java:3232)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2974)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3250)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2906)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
	at com.sun.proxy.$Proxy390.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5093)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy391.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1232)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:173)
	at com.sun.proxy.$Proxy392.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:2679)
	... 68 more
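The same failure and fallback repeat for the 2.2 client. Schematically, the behavior the shim warning describes looks like the following (a simplified sketch, not Spark's actual Shim code; the parameters are hypothetical):

    // Try server-side partition filtering; on failure, degrade to listing
    // every partition so the caller can filter client-side.
    def partitionsWithFallback[A](byFilter: () => Seq[A], all: () => Seq[A]): Seq[A] =
      try byFilter()
      catch {
        case _: Exception =>
          // Degraded path: all partition metadata is fetched from the metastore.
          all()
      }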
[info] - 2.3: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (21 seconds, 141 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds<=>20170101 (285 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 (225 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (169 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk='aa' (191 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (86 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (65 milliseconds)
[info] - 2.3: getPartitionsByFilter: 20170101=ds (103 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 and h=2 (227 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (63 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds=20170101 or ds=20170102 (158 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (107 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (57 milliseconds)
[info] - 2.3: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (61 milliseconds)
[info] - 2.3: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (55 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (133 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (53 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (174 milliseconds)
[info] - 2.3: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (161 milliseconds)
[info] - 2.3: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (234 milliseconds)
05:59:48.676 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - 2.3: create client with sharesHadoopClasses = false (1 second, 295 milliseconds)
[info] HivePartitionFilteringSuite(3.0):
Hive Session ID = 024c81d8-c965-4227-b5e7-238f57482170
05:59:50.958 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
05:59:51.715 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
05:59:52.881 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
05:59:53.829 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:53.830 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:53.831 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:53.831 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:53.832 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:53.832 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.420 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.420 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.421 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.421 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:54.422 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
05:59:57.543 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
05:59:57.543 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.23
05:59:57.737 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
05:59:58.343 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
05:59:58.347 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
05:59:58.472 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = 7edc66d6-d9b7-4ffb-a130-f55c9eae5024
06:00:01.920 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:02.547 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:03.697 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:04.643 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:04.644 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:04.645 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:04.645 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:04.646 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:04.646 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.244 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.245 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.246 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.246 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.246 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:05.247 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:06.340 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.0.0
06:00:06.341 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.0.0, comment = Set by MetaStore jenkins@192.168.10.23
06:00:08.380 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:00:08.385 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:08.482 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:09.984 WARN org.apache.spark.sql.hive.client.Shim_v3_0: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3841)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3287)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$600(ObjectStore.java:244)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3771)
	at org.apache.hadoop.hive.metastore.ObjectStore$8.getJdoResult(ObjectStore.java:3755)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3498)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3773)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3428)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy416.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5766)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy418.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1433)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1427)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy419.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3061)
	... 68 more
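For the 3.0 client the call passes through the catalog-aware listPartitionsByFilter overloads, but the root cause is identical. A hypothetical direct reproduction against the metastore client (database and table names are placeholders, and the setup of an actual running metastore is elided):

    import org.apache.hadoop.hive.conf.HiveConf
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient

    val conf = new HiveConf()
    conf.set("hive.metastore.try.direct.sql", "false") // force the JDO/ORM path
    val client = new HiveMetaStoreClient(conf)
    // With a non-string partition key such as ds, the JDO path rejects the
    // filter: "Filtering is supported only on partition keys of type string".
    client.listPartitionsByFilter("default", "test", "ds = 20170101", 100.toShort)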
[info] - 3.0: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (10 seconds, 835 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds<=>20170101 (210 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 (214 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (158 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk='aa' (147 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (63 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (63 milliseconds)
[info] - 3.0: getPartitionsByFilter: 20170101=ds (103 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 and h=2 (190 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (61 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds=20170101 or ds=20170102 (174 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (116 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (62 milliseconds)
[info] - 3.0: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (63 milliseconds)
[info] - 3.0: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (63 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (121 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (54 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (174 milliseconds)
[info] - 3.0: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (165 milliseconds)
[info] - 3.0: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (245 milliseconds)
06:00:13.837 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = 6517c557-e036-4aad-b929-c62eb6411269
[info] - 3.0: create client with sharesHadoopClasses = false (1 second, 488 milliseconds)
[info] HivePartitionFilteringSuite(3.1):
Hive Session ID = d834fd03-90eb-4854-857a-12627cbeb034
06:00:16.085 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:16.731 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:18.127 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:18.964 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:18.965 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:18.966 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:18.967 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:18.967 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:18.968 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.548 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.549 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.550 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.550 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.550 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:19.550 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:22.670 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
06:00:22.670 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.23
06:00:22.864 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
06:00:23.415 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:00:23.420 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:23.525 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
Hive Session ID = f9d7a6ed-4181-4cce-87a1-ce8afc9c2281
06:00:26.533 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:27.100 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:28.169 WARN com.zaxxer.hikari.util.DriverDataSource: Registered driver with driverClassName=org.apache.derby.jdbc.EmbeddedDriver was not found, trying direct instantiation.
06:00:29.094 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.095 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.096 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.096 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.096 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.096 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.704 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.705 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:29.706 WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
06:00:31.045 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. metastore.schema.verification is not enabled so recording the schema version 3.1.0
06:00:31.045 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 3.1.0, comment = Set by MetaStore jenkins@192.168.10.23
06:00:32.174 WARN org.apache.hadoop.hive.ql.session.SessionState: METASTORE_FILTER_HOOK will be ignored, since hive.security.authorization.manager is set to instance of HiveAuthorizerFactory.
06:00:32.180 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:32.285 WARN org.apache.hadoop.hive.metastore.ObjectStore: datanucleus.autoStartMechanismMode is set to unsupported value null . Setting it to value: ignored
06:00:34.544 WARN org.apache.spark.sql.hive.client.Shim_v3_1: Caught Hive MetaException attempting to get partition metadata by filter from Hive. Falling back to fetching all partition metadata, which will degrade performance. Modifying your Hive metastore configuration to set hive.metastore.try.direct.sql to true may resolve this problem.
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:810)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitionsByFilter$1(HiveClientImpl.scala:745)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:294)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:276)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:743)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuite.$anonfun$new$1(HivePartitionFilteringSuite.scala:105)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:151)
	at org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
	at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:286)
	at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:58)
	at org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:393)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:381)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:376)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:458)
	at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite.run(Suite.scala:1124)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:518)
	at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:58)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:58)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1187)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1234)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1232)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1166)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.runNestedSuites(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.Suite.run(Suite.scala:1121)
	at org.scalatest.Suite.run$(Suite.scala:1106)
	at org.apache.spark.sql.hive.client.HivePartitionFilteringSuites.run(HivePartitionFilteringSuites.scala:24)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:317)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:510)
	at sbt.ForkMain$Run$2.call(ForkMain.java:296)
	at sbt.ForkMain$Run$2.call(ForkMain.java:286)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: MetaException(message:Filtering is supported only on partition keys of type string)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$FilterBuilder.setError(ExpressionTree.java:184)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.getJdoFilterPushdownParam(ExpressionTree.java:437)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilterOverPartitions(ExpressionTree.java:355)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree$LeafNode.generateJDOFilter(ExpressionTree.java:277)
	at org.apache.hadoop.hive.metastore.parser.ExpressionTree.generateJDOFilterFragment(ExpressionTree.java:581)
	at org.apache.hadoop.hive.metastore.ObjectStore.makeQueryFilterString(ObjectStore.java:3929)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsViaOrmFilter(ObjectStore.java:3375)
	at org.apache.hadoop.hive.metastore.ObjectStore.access$800(ObjectStore.java:247)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3859)
	at org.apache.hadoop.hive.metastore.ObjectStore$11.getJdoResult(ObjectStore.java:3843)
	at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3586)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3861)
	at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3516)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
	at com.sun.proxy.$Proxy442.getPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5883)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
	at com.sun.proxy.$Proxy444.get_partitions_by_filter(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1444)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:1438)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy445.listPartitionsByFilter(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByFilter(Hive.java:3139)
	... 68 more
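The MetaException above is the key to the fallback the next test exercises deliberately: with hive.metastore.try.direct.sql=false the metastore routes getPartitionsByFilter through its JDO/ORM path, which (per the exception message) supports filter pushdown only on partition keys of type string, so predicates over the integral ds/h keys are rejected and Spark fetches all partition metadata instead. A minimal sketch of a setup that would hit this path, assuming a local Hive-enabled session; the table name and schema are illustrative, not taken from this build:

    import org.apache.spark.sql.SparkSession

    // Direct SQL off forces the metastore onto the JDO/ORM filter path
    // that raised the MetaException in the trace above.
    val spark = SparkSession.builder()
      .appName("partition-filter-fallback")
      .config("spark.hadoop.hive.metastore.try.direct.sql", "false")
      .enableHiveSupport()
      .getOrCreate()

    // ds and h are non-string partition keys, mirroring the suite's layout.
    spark.sql(
      "CREATE TABLE IF NOT EXISTS t (v INT) " +
        "PARTITIONED BY (ds INT, h INT, chunk STRING)")

    // The pushed-down predicate over ds/h is rejected by the JDO path;
    // Spark logs the fallback warning and lists every partition instead.
    spark.sql("SELECT * FROM t WHERE ds = 20170101 AND h >= 2").show()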
[info] - 3.1: getPartitionsByFilter returns all partitions when hive.metastore.try.direct.sql=false (10 seconds, 415 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds<=>20170101 (221 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 (239 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=(20170101 + 1) and h=0 (168 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk='aa' (158 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as int)=1 (not a valid partition predicate) (63 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(chunk as boolean)=true (not a valid partition predicate) (63 milliseconds)
[info] - 3.1: getPartitionsByFilter: 20170101=ds (104 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 and h=2 (194 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long)=20170101L and h=2 (62 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds=20170101 or ds=20170102 (154 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using IN expression) (99 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using IN expression) (56 milliseconds)
[info] - 3.1: getPartitionsByFilter: ds in (20170102, 20170103) (using INSET expression) (59 milliseconds)
[info] - 3.1: getPartitionsByFilter: cast(ds as long) in (20170102L, 20170103L) (using INSET expression) (55 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using IN expression) (115 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') (using INSET expression) (48 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<2) (162 milliseconds)
[info] - 3.1: getPartitionsByFilter: (ds=20170101 and h>=2) or (ds=20170102 and h<(1+1)) (172 milliseconds)
[info] - 3.1: getPartitionsByFilter: chunk in ('ab', 'ba') and ((ds=20170101 and h>=2) or (ds=20170102 and h<2)) (266 milliseconds)
06:00:38.251 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hive Session ID = df62c8af-dd64-40d0-836c-969b9e0f7730
[info] - 3.1: create client with sharesHadoopClasses = false (1 second, 49 milliseconds)
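The test names above map out which predicate shapes the shim can translate into a metastore filter string — plain equality, <=>, range comparisons, IN/INSET lists, and and/or combinations — versus the ones flagged "not a valid partition predicate", where a cast wraps the partition column. A short sketch of the contrast, assuming `spark` is the Hive-enabled session and `t` the partitioned table from the sketch above:

    // Pushable: comparisons and IN lists directly over partition columns
    // become part of the metastore filter string.
    spark.sql("SELECT * FROM t WHERE chunk IN ('ab', 'ba') AND ds = 20170102")

    // Not pushable: the cast defeats the translation, so pruning happens
    // in Spark only after all partition metadata has been fetched.
    spark.sql("SELECT * FROM t WHERE CAST(chunk AS INT) = 1")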
[info] ErrorPositionSuite:
[info] - ambiguous attribute reference 1 (20 milliseconds)
[info] - ambiguous attribute reference 2 (6 milliseconds)
[info] - ambiguous attribute reference 3 (2 milliseconds)
06:00:39.413 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/src specified for non-external table:src
[info] - unresolved attribute 1 (555 milliseconds)
[info] - unresolved attribute 2 (3 milliseconds)
[info] - unresolved attribute 3 (4 milliseconds)
[info] - unresolved attribute 4 (4 milliseconds)
[info] - unresolved attribute 5 (5 milliseconds)
[info] - unresolved attribute 6 (9 milliseconds)
[info] - unresolved attribute 7 (12 milliseconds)
[info] - multi-char unresolved attribute (5 milliseconds)
[info] - unresolved attribute group by (12 milliseconds)
[info] - unresolved attribute order by (9 milliseconds)
[info] - unresolved attribute where (7 milliseconds)
[info] - unresolved attribute backticks (5 milliseconds)
[info] - parse error (21 milliseconds)
[info] - bad relation (5 milliseconds)
[info] - other expressions !!! IGNORED !!!
06:00:41.622 WARN org.apache.hadoop.hive.common.FileUtils: File file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/src does not exist; Force to delete it.
06:00:41.622 ERROR org.apache.hadoop.hive.common.FileUtils: Failed to delete file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/src
06:00:41.787 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:00:41.787 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:00:41.792 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:00:41.926 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:00:41.926 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:00:41.927 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] HiveShowCreateTableSuite:
06:00:41.982 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:42.410 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with user specified schema (760 milliseconds)
06:00:43.075 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:43.445 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table CTAS (1 second, 19 milliseconds)
06:00:44.111 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:44.649 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned data source table (1 second, 104 milliseconds)
06:00:45.241 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:45.398 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - bucketed data source table (759 milliseconds)
06:00:46.001 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:46.389 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - partitioned bucketed data source table (1 second, 77 milliseconds)
06:00:46.917 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:47.074 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with a comment (602 milliseconds)
06:00:47.555 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:47.717 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`ddl_test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - data source table with table properties (662 milliseconds)
[info] - data source table using Dataset API (1 second, 417 milliseconds)
[info] - temp view (37 milliseconds)
06:00:49.417 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
06:00:49.694 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider json. Persisting data source table `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] - SPARK-24911: keep quotes for nested fields (492 milliseconds)
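The repeated TestHiveExternalCatalog warnings in this suite are expected rather than failures: a json data source table has no corresponding Hive SerDe, so the catalog persists it in Spark SQL's own metastore format, which Hive itself cannot query. A sketch of the two write paths, assuming `spark` as above; the table names are illustrative, and treating parquet as Hive-compatible is an assumption about formats with a registered SerDe:

    // Triggers the warning above: json has no Hive SerDe mapping, so the
    // table is stored in the Spark-specific format noted in the log.
    spark.range(3).write.format("json").saveAsTable("ddl_test")

    // A format with a known SerDe can usually be persisted in a
    // Hive-compatible layout instead.
    spark.range(3).write.format("parquet").saveAsTable("ddl_test_parquet")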
[info] - view (508 milliseconds)
[info] - view with output columns (504 milliseconds)
[info] - view with table comment and properties (619 milliseconds)
06:00:51.534 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:51.977 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - simple hive table (629 milliseconds)
[info] - simple external hive table (1 second, 92 milliseconds)
06:00:53.265 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:53.481 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - partitioned hive table (424 milliseconds)
06:00:53.684 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:53.926 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with explicit storage info (439 milliseconds)
06:00:54.122 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:54.372 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause (477 milliseconds)
06:00:54.599 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:54.906 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with serde info (502 milliseconds)
06:00:55.102 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:55.363 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive bucketing is supported (505 milliseconds)
06:00:55.607 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive partitioned view is not supported (438 milliseconds)
06:00:56.048 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
06:00:56.336 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - SPARK-24911: keep quotes for nested fields in hive (511 milliseconds)
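Both SPARK-24911 tests check that generated DDL keeps backticks on nested struct fields, which earlier releases dropped. A minimal sketch, with a hypothetical table and a field name deliberately chosen to need quoting:

    // The inner `a-b` must stay backquoted in the DDL emitted by
    // SHOW CREATE TABLE; before SPARK-24911 nested names lost their quotes.
    spark.sql(
      "CREATE TABLE nested_q (item STRUCT<`a-b`: INT, c: STRING>) USING json")
    spark.sql("SHOW CREATE TABLE nested_q").show(truncate = false)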
06:00:56.563 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - simple hive table in Spark DDL (467 milliseconds)
[info] - show create table as serde can't work on data source table (234 milliseconds)
[info] - simple external hive table in Spark DDL (331 milliseconds)
06:00:57.593 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with STORED AS clause in Spark DDL (458 milliseconds)
06:00:58.107 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with nested fields with STORED AS clause in Spark DDL (509 milliseconds)
06:00:58.559 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with unsupported fileformat in Spark DDL (235 milliseconds)
06:00:58.795 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - hive table with serde info in Spark DDL (1 second, 300 milliseconds)
06:01:00.096 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - partitioned, bucketed hive table in Spark DDL (479 milliseconds)
06:01:00.576 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/NewSparkPullRequestBuilder@2/target/tmp/warehouse-1e8d6221-ed02-4436-a6c1-e9f7ff467641/t1 specified for non-external table:t1
[info] - show create table for transactional hive table (1 second, 1 millisecond)
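The tail of this suite covers the two DDL-generation modes: plain SHOW CREATE TABLE emits Spark DDL for any table, while the AS SERDE variant emits Hive DDL and, per the "show create table as serde can't work on data source table" result above, is rejected for pure data source tables. A sketch, assuming the hive table `t1` from these tests:

    // Spark DDL, available for both Hive and data source tables.
    spark.sql("SHOW CREATE TABLE t1").show(truncate = false)

    // Hive DDL; fails with an error when the target is a data source
    // table rather than a Hive table, as the test above verifies.
    spark.sql("SHOW CREATE TABLE t1 AS SERDE").show(truncate = false)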
06:01:01.694 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:01:01.694 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:01:01.694 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
06:01:01.846 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
06:01:01.846 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.jdbc.timeout does not exist
06:01:01.846 WARN org.apache.hadoop.hive.conf.HiveConf: HiveConf of name hive.stats.retries.wait does not exist
[info] BigDataBenchmarkSuite:
[info] - No data files found for BigDataBenchmark tests. !!! IGNORED !!!
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.testUDAF started
[info] Test org.apache.spark.sql.hive.JavaDataFrameSuite.saveTableAndQueryIt started
[info] Test run finished: 0 failed, 0 ignored, 2 total, 6.928s
[info] Test run started
[info] Test org.apache.spark.sql.hive.JavaMetastoreDataSourcesSuite.saveTableAndQueryIt started
06:01:09.827 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Couldn't find corresponding Hive SerDe for data source provider org.apache.spark.sql.json. Persisting data source table `default`.`javasavedtable` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
[info] Test run finished: 0 failed, 0 ignored, 1 total, 1.214s
[info] ScalaTest
[info] Run completed in 2 hours, 49 minutes, 6 seconds.
[info] Total number of tests run: 3637
[info] Suites: completed 131, aborted 0
[info] Tests: succeeded 3637, failed 0, canceled 0, ignored 598, pending 0
[info] All tests passed.
[info] Passed: Total 3640, Failed 0, Errors 0, Passed 3640, Ignored 598
[error] (streaming-kafka-0-10/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 10215 s, completed May 15, 2020 6:01:45 AM
[error] running /home/jenkins/workspace/NewSparkPullRequestBuilder@2/build/sbt -Phadoop-2.7 -Phive-2.3 -Phive -Pkubernetes -Pyarn -Phive-thriftserver -Pspark-ganglia-lgpl -Pkinesis-asl -Pmesos -Phadoop-cloud -Dtest.exclude.tags=org.apache.spark.tags.ExtendedYarnTest test ; received return code 1
Attempting to post to Github...
 > Post successful.
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE