{"id":414,"date":"2017-08-05T22:31:31","date_gmt":"2017-08-05T14:31:31","guid":{"rendered":"http:\/\/vinta.ws\/code\/?p=414"},"modified":"2026-03-17T01:19:15","modified_gmt":"2026-03-16T17:19:15","slug":"spark-troubleshooting","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/spark-troubleshooting.html","title":{"rendered":"Spark troubleshooting"},"content":{"rendered":"<p>Apache Spark 2.x Troubleshooting Guide<br \/>\n<a href=\"https:\/\/www.slideshare.net\/jcmia1\/a-beginners-guide-on-troubleshooting-spark-applications\">https:\/\/www.slideshare.net\/jcmia1\/a-beginners-guide-on-troubleshooting-spark-applications<\/a><br \/>\n<a href=\"https:\/\/www.slideshare.net\/jcmia1\/apache-spark-20-tuning-guide\">https:\/\/www.slideshare.net\/jcmia1\/apache-spark-20-tuning-guide<\/a><\/p>\n<h2>Check your cluster UI to ensure that workers are registered and have sufficient resources<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-bash\">PYSPARK_DRIVER_PYTHON=\"jupyter\" \nPYSPARK_DRIVER_PYTHON_OPTS=\"notebook --ip 0.0.0.0\" \npyspark \n--packages \"org.xerial:sqlite-jdbc:3.16.1,com.github.fommil.netlib:all:1.1.2\" \n--driver-memory 4g \n--executor-memory 20g \n--master spark:\/\/TechnoCore.local:7077<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-java\">TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources<\/code><\/pre>\n<p>\u53ef\u80fd\u662f\u4f60\u6307\u5b9a\u7684 <code>--executor-memory<\/code> \u8d85\u904e\u4e86 worker \u7684 memory\u3002<\/p>\n<p>\u4f60\u53ef\u4ee5\u5728 Spark Master UI <a href=\"http:\/\/localhost:8080\/\">http:\/\/localhost:8080\/<\/a> \u770b\u5230\u5404\u500b worker \u7e3d\u5171\u6709\u591a\u5c11 memory \u53ef\u4ee5\u7528\u3002\u5982\u679c\u6bcf\u53f0 worker \u53ef\u4ee5\u7528\u7684 memory \u5bb9\u91cf\u4e0d\u540c\uff0cSpark \u5c31\u53ea\u6703\u9078\u64c7\u90a3\u4e9b memory \u5927\u65bc 
<code>--executor-memory<\/code> \u7684 workers\u3002<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/spoddutur.github.io\/spark-notes\/distribution_of_executors_cores_and_memory_for_spark_application\">https:\/\/spoddutur.github.io\/spark-notes\/distribution_of_executors_cores_and_memory_for_spark_application<\/a><\/p>\n<h2>SparkContext was shut down<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">ERROR Executor: Exception in task 1.0 in stage 6034.0 (TID 21592)\njava.lang.StackOverflowError\n...\nERROR LiveListenerBus: SparkListenerBus has already stopped! Dropping event SparkListenerJobEnd(55,1494185401195,JobFailed(org.apache.spark.SparkException: Job 55 cancelled because SparkContext was shut down))<\/code><\/pre>\n<p>\u53ef\u80fd\u662f executor \u7684\u8a18\u61b6\u9ad4\u4e0d\u5920\uff0c\u5c0e\u81f4 Out Of Memory (OOM) \u4e86\u3002<\/p>\n<p>ref:<br \/>\n<a href=\"http:\/\/stackoverflow.com\/questions\/32822948\/sparkcontext-was-shut-down-while-running-spark-on-a-large-dataset\">http:\/\/stackoverflow.com\/questions\/32822948\/sparkcontext-was-shut-down-while-running-spark-on-a-large-dataset<\/a><\/p>\n<h2>Container exited with a non-zero exit code 56 (or some other numbers)<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">WARN org.apache.spark.scheduler.cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Container marked as failed: container_1504241464590_0001_01_000002 on host: albedo-w-1.c.albedo-157516.internal. Exit status: 56. 
Diagnostics: Exception from container-launch.\nContainer id: container_1504241464590_0001_01_000002\nExit code: 56\nStack trace: ExitCodeException exitCode=56:\n    at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)\n    at org.apache.hadoop.util.Shell.run(Shell.java:869)\n    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)\n    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)\n    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)\n    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)\n    at java.util.concurrent.FutureTask.run(FutureTask.java:266)\n    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n    at java.lang.Thread.run(Thread.java:748)\n\nContainer exited with a non-zero exit code 56<\/code><\/pre>\n<p>The executor probably ran out of memory (OOM).<\/p>\n<p>ref:<br \/>\n<a href=\"http:\/\/stackoverflow.com\/questions\/39038460\/understanding-spark-container-failure\">http:\/\/stackoverflow.com\/questions\/39038460\/understanding-spark-container-failure<\/a><\/p>\n<h2>Exception in thread &quot;main&quot; java.lang.StackOverflowError<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">Exception in thread \"main\" java.lang.StackOverflowError\n    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)\n    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1495)\n    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)\n    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)\n    at 
java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)\n    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)\n    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)\n    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)\n    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)\n    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)\n    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)\n    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)\n    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)\n    at scala.collection.immutable.List$SerializationProxy.writeObject(List.scala:468)\n    at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)\n    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n    at java.lang.reflect.Method.invoke(Method.java:498)\n    ...<\/code><\/pre>\n<p>Solution:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-scala\">import org.apache.spark.ml.recommendation.ALS\nimport org.apache.spark.sql.SparkSession\n\nval spark: SparkSession = SparkSession.builder().getOrCreate()\nval sc = spark.sparkContext\nsc.setCheckpointDir(\".\/spark-data\/checkpoint\")\n\n\/\/ Calling sc.setCheckpointDir() is enough to enable checkpointing,\n\/\/ so you don't have to set checkpointInterval explicitly\nval als = new ALS()\n  .setCheckpointInterval(2)<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/stackoverflow.com\/questions\/31484460\/spark-gives-a-stackoverflowerror-when-training-using-als\">https:\/\/stackoverflow.com\/questions\/31484460\/spark-gives-a-stackoverflowerror-when-training-using-als<\/a><br \/>\n<a 
href=\"https:\/\/stackoverflow.com\/questions\/35127720\/what-is-the-difference-between-spark-checkpoint-and-persist-to-a-disk\">https:\/\/stackoverflow.com\/questions\/35127720\/what-is-the-difference-between-spark-checkpoint-and-persist-to-a-disk<\/a><\/p>\n<h2>Randomness of hash of string should be disabled via PYTHONHASHSEED<\/h2>\n<p>\u89e3\u6c7a\u8fa6\u6cd5\uff1a<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ cd $SPARK_HOME\n$ cp conf\/spark-env.sh.template conf\/spark-env.sh\n$ echo \"export PYTHONHASHSEED=42\" &gt;&gt; conf\/spark-env.sh<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/issues.apache.org\/jira\/browse\/SPARK-13330\">https:\/\/issues.apache.org\/jira\/browse\/SPARK-13330<\/a><\/p>\n<h2>It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforamtion<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. 
For more information, see SPARK-5063.<\/code><\/pre>\n<p>This happens because <code>spark.sparkContext<\/code> can only be accessed from the driver program, not from the workers (for example, the lambda functions or UDFs you pass to RDD operations run on the workers).<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/rdd-programming-guide.html#passing-functions-to-spark\">https:\/\/spark.apache.org\/docs\/latest\/rdd-programming-guide.html#passing-functions-to-spark<\/a><br \/>\n<a href=\"https:\/\/engineering.sharethrough.com\/blog\/2013\/09\/13\/top-3-troubleshooting-tips-to-keep-you-sparking\/\">https:\/\/engineering.sharethrough.com\/blog\/2013\/09\/13\/top-3-troubleshooting-tips-to-keep-you-sparking\/<\/a><\/p>\n<p>Spark automatically creates closures:<\/p>\n<ul>\n<li>for functions that run on RDDs at workers,<\/li>\n<li>and for any global variables that are used by those workers.<\/li>\n<\/ul>\n<p>One closure is sent to each worker for every task. Closures travel one way, from the driver to the workers.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/gerardnico.com\/wiki\/spark\/closure\">https:\/\/gerardnico.com\/wiki\/spark\/closure<\/a><\/p>\n<h2>Unable to find encoder for type stored in a Dataset<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases. 
someDF.as[SomeCaseClass]<\/code><\/pre>\n<p>Solution:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-scala\">import spark.implicits._\n\nyourDF.as[YourCaseClass]<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/stackoverflow.com\/questions\/38664972\/why-is-unable-to-find-encoder-for-type-stored-in-a-dataset-when-creating-a-dat\">https:\/\/stackoverflow.com\/questions\/38664972\/why-is-unable-to-find-encoder-for-type-stored-in-a-dataset-when-creating-a-dat<\/a><\/p>\n<h2>Task not serializable<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">Caused by: java.io.NotSerializableException: Settings\nSerialization stack:\n    - object not serializable (class: Settings, value: Settings@2dfe2f00)\n    - field (class: Settings$$anonfun$1, name: $outer, type: class Settings)\n    - object (class Settings$$anonfun$1, &lt;function1&gt;)<\/code><\/pre>\n<pre class=\"line-numbers\"><code class=\"language-scala\">Caused by: org.apache.spark.SparkException:\n    Task not serializable at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:298)<\/code><\/pre>\n<p>This usually means one of your closure functions references an object defined in the driver program. Spark automatically serializes any referenced object and ships it to the worker nodes along with the closure, so if that object or its class cannot be serialized, you get this error.<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/www.safaribooksonline.com\/library\/view\/spark-the-definitive\/9781491912201\/ch04.html#user-defined-functions\">https:\/\/www.safaribooksonline.com\/library\/view\/spark-the-definitive\/9781491912201\/ch04.html#user-defined-functions<\/a><br \/>\n<a 
href=\"http:\/\/www.puroguramingu.com\/2016\/02\/26\/spark-dos-donts.html\">http:\/\/www.puroguramingu.com\/2016\/02\/26\/spark-dos-donts.html<\/a><br \/>\n<a href=\"https:\/\/stackoverflow.com\/questions\/36176011\/spark-sql-udf-task-not-serialisable\">https:\/\/stackoverflow.com\/questions\/36176011\/spark-sql-udf-task-not-serialisable<\/a><br \/>\n<a href=\"https:\/\/stackoverflow.com\/questions\/22592811\/task-not-serializable-java-io-notserializableexception-when-calling-function-ou\">https:\/\/stackoverflow.com\/questions\/22592811\/task-not-serializable-java-io-notserializableexception-when-calling-function-ou<\/a><br \/>\n<a href=\"https:\/\/databricks.gitbooks.io\/databricks-spark-knowledge-base\/content\/troubleshooting\/javaionotserializableexception.html\">https:\/\/databricks.gitbooks.io\/databricks-spark-knowledge-base\/content\/troubleshooting\/javaionotserializableexception.html<\/a><br \/>\n<a href=\"https:\/\/mp.weixin.qq.com\/s\/BT6sXZlHcufAFLgTONCHsg\">https:\/\/mp.weixin.qq.com\/s\/BT6sXZlHcufAFLgTONCHsg<\/a><\/p>\n<p>\u5982\u679c\u4f60\u53ea\u6709\u5728 Databricks Notebook \u88e1\u9047\u5230\u9019\u500b\u932f\u8aa4\uff0c\u56e0\u70ba Notebook \u7684\u904b\u4f5c\u6a5f\u5236\u8ddf\u4e00\u822c\u7684 Spark application \u7a0d\u5fae\u6709\u9ede\u4e0d\u540c\uff0c\u4f60\u53ef\u4ee5\u8a66\u8a66 package cell\u3002<\/p>\n<p>ref:<br \/>\n<a href=\"https:\/\/docs.databricks.com\/user-guide\/notebooks\/package-cells.html\">https:\/\/docs.databricks.com\/user-guide\/notebooks\/package-cells.html<\/a><\/p>\n<h2>java.lang.IllegalStateException: Cannot find any build directories.<\/h2>\n<pre class=\"line-numbers\"><code class=\"language-scala\">java.lang.IllegalStateException: Cannot find any build directories.\n    at org.apache.spark.launcher.CommandBuilderUtils.checkState(CommandBuilderUtils.java:248)\n    at org.apache.spark.launcher.AbstractCommandBuilder.getScalaVersion(AbstractCommandBuilder.java:240)\n    at 
org.apache.spark.launcher.AbstractCommandBuilder.buildClassPath(AbstractCommandBuilder.java:194)\n    at org.apache.spark.launcher.AbstractCommandBuilder.buildJavaCommand(AbstractCommandBuilder.java:117)\n    at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:39)\n    at org.apache.spark.launcher.WorkerCommandBuilder.buildCommand(WorkerCommandBuilder.scala:45)\n    at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:63)\n    at org.apache.spark.deploy.worker.CommandUtils$.buildProcessBuilder(CommandUtils.scala:51)\n    at org.apache.spark.deploy.worker.ExecutorRunner.org$apache$spark$deploy$worker$ExecutorRunner$$fetchAndRunExecutor(ExecutorRunner.scala:145)\n    at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:73)<\/code><\/pre>\n<p>A likely cause is that <code>SPARK_HOME<\/code> is not set, or that your launch script does not pick up that environment variable.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Wild \"Task not serializable\" 
appeared!<\/p>\n","protected":false},"author":1,"featured_media":415,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[97,112,38],"tags":[108,29,2,109],"class_list":["post-414","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-about-ai","category-about-big-data","category-about-devops","tag-apache-spark","tag-debug","tag-python","tag-scala"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/414","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=414"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/414\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/415"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=414"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/categories?post=414"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=414"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}