{"id":374,"date":"2017-04-27T01:55:41","date_gmt":"2017-04-26T17:55:41","guid":{"rendered":"http:\/\/vinta.ws\/code\/?p=374"},"modified":"2026-02-18T01:20:36","modified_gmt":"2026-02-17T17:20:36","slug":"setup-spark-on-macos","status":"publish","type":"post","link":"https:\/\/vinta.ws\/code\/setup-spark-on-macos.html","title":{"rendered":"Setup Spark on macOS"},"content":{"rendered":"<h2>Install<\/h2>\n<p>First, you need the Java 8 JDK.<br \/>\n<a href=\"http:\/\/www.oracle.com\/technetwork\/java\/javase\/downloads\/index.html\">http:\/\/www.oracle.com\/technetwork\/java\/javase\/downloads\/index.html<\/a><\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ java -version\njava version \"1.8.0_131\"<\/code><\/pre>\n<h3>Homebrew Version<\/h3>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ brew update\n$ brew install maven apache-spark<\/code><\/pre>\n<h3>Pre-built Version<\/h3>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ mkdir -p \/usr\/local\/share\/apache-spark &amp;&amp; \n  cd \/usr\/local\/share\/apache-spark &amp;&amp; \n  wget https:\/\/www.apache.org\/dyn\/closer.lua\/spark\/spark-2.2.0\/spark-2.2.0-bin-hadoop2.7.tgz &amp;&amp; \n  tar -xvzf spark-2.2.0-bin-hadoop2.7.tgz<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"http:\/\/spark.apache.org\/downloads.html\">http:\/\/spark.apache.org\/downloads.html<\/a><\/p>\n<h3>Build Version<\/h3>\n<p>Building from source is the recommended way, since it lets you enable optional build profiles such as native BLAS support (netlib-lgpl).<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ brew install scala@2.11\n$ export PATH=\"\/usr\/local\/opt\/scala@2.11\/bin:$PATH\"\n$ scala -version\nScala code runner version 2.11.8 -- Copyright 2002-2016, LAMP\/EPFL\n\n$ mkdir -p \/usr\/local\/share\/apache-spark &amp;&amp; \n  cd \/usr\/local\/share\/apache-spark &amp;&amp; \n  wget https:\/\/d3kbcqa49mib13.cloudfront.net\/spark-2.2.0.tgz &amp;&amp; \n  tar -xvzf spark-2.2.0.tgz &amp;&amp; \n  cd spark-2.2.0\n\n$ .\/build\/mvn -Pnetlib-lgpl -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 
-DskipTests -T 4C package\n# or\n$ .\/build\/mvn -Pnetlib-lgpl -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.3 -DskipTests clean package<\/code><\/pre>\n<p>After building, you can verify that Spark picks up a native BLAS implementation:<\/p>\n<pre class=\"line-numbers\"><code class=\"language-scala\">$ spark-shell --packages \"com.github.fommil.netlib:all:1.1.2\"\nscala&gt; import com.github.fommil.netlib.BLAS\nimport com.github.fommil.netlib.BLAS\nscala&gt; BLAS.getInstance().getClass().getName()\nres1: String = com.github.fommil.netlib.NativeSystemBLAS<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"http:\/\/spark.apache.org\/downloads.html\">http:\/\/spark.apache.org\/downloads.html<\/a><br \/>\n<a href=\"http:\/\/spark.apache.org\/docs\/latest\/building-spark.html\">http:\/\/spark.apache.org\/docs\/latest\/building-spark.html<\/a><br \/>\n<a href=\"http:\/\/spark.apache.org\/docs\/latest\/ml-guide.html#dependencies\">http:\/\/spark.apache.org\/docs\/latest\/ml-guide.html#dependencies<\/a><\/p>\n<h2>Configurations<\/h2>\n<p>In ~\/.zshrc. Keep only one of the three SPARK_HOME sections below, depending on which installation method you used.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">if which java &gt; \/dev\/null; then\n  export JAVA_HOME=\"$(\/usr\/libexec\/java_home -v 1.8)\"\n  export PATH=\"$JAVA_HOME\/bin:$PATH\"\nfi\n\nexport PATH=\"\/usr\/local\/opt\/scala@2.11\/bin:$PATH\"\n\n# homebrew version\nexport SPARK_HOME=\"\/usr\/local\/Cellar\/apache-spark\/2.2.0\/libexec\"\nexport PYTHONPATH=\"$SPARK_HOME\/python:$SPARK_HOME\/python\/lib\/py4j-0.10.4-src.zip:$PYTHONPATH\"\nexport PYSPARK_DRIVER_PYTHON=\"ipython\"\n\n# pre-built version\nexport SPARK_HOME=\"\/usr\/local\/share\/apache-spark\/spark-2.2.0-bin-hadoop2.7\"\nexport PATH=\"$SPARK_HOME\/bin:$PATH\"\nexport PYTHONPATH=\"$SPARK_HOME\/python:$SPARK_HOME\/python\/lib\/py4j-0.10.4-src.zip:$PYTHONPATH\"\n\n# build version\nexport SPARK_HOME=\"\/usr\/local\/share\/apache-spark\/spark-2.2.0\"\nexport PATH=\"$SPARK_HOME\/bin:$PATH\"\nexport PYTHONPATH=\"$SPARK_HOME\/python:$SPARK_HOME\/python\/lib\/py4j-0.10.4-src.zip:$PYTHONPATH\"<\/code><\/pre>\n<p>ref:<br \/>\n<a 
href=\"https:\/\/spark.apache.org\/docs\/latest\/programming-guide.html\">https:\/\/spark.apache.org\/docs\/latest\/programming-guide.html<\/a><br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/configuration.html\">https:\/\/spark.apache.org\/docs\/latest\/configuration.html<\/a><\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ cd $SPARK_HOME\n\n$ cp conf\/spark-defaults.conf.template conf\/spark-defaults.conf\n# then add the following lines to conf\/spark-defaults.conf:\nspark.driver.memory              4g\nspark.executor.memory            4g\nspark.jars.packages              com.github.fommil.netlib:all:1.1.2,mysql:mysql-connector-java:5.1.41\nspark.serializer                 org.apache.spark.serializer.KryoSerializer\n\n$ cp conf\/spark-env.sh.template conf\/spark-env.sh\n# then add the following line to conf\/spark-env.sh:\nexport PYTHONHASHSEED=42\n\n$ cp conf\/log4j.properties.template conf\/log4j.properties<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/configuration.html\">https:\/\/spark.apache.org\/docs\/latest\/configuration.html<\/a><\/p>\n<h2>Commands<\/h2>\n<h3>Local Mode<\/h3>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ spark-shell\n\n$ export PYSPARK_DRIVER_PYTHON=\"jupyter\" &amp;&amp; \nexport PYSPARK_DRIVER_PYTHON_OPTS=\"notebook --ip 0.0.0.0\" &amp;&amp; \npyspark \\\n--packages \"com.github.fommil.netlib:all:1.1.2,mysql:mysql-connector-java:5.1.41\" \\\n--driver-memory 4g \\\n--executor-memory 4g \\\n--master \"local[*]\"\n\n$ spark-shell \\\n--packages \"com.github.fommil.netlib:all:1.1.2,mysql:mysql-connector-java:5.1.41\" \\\n--master \"local-cluster[3, 1, 4096]\"\n\n# Spark Application UI on the driver\n$ open http:\/\/localhost:4040\/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/programming-guide.html\">https:\/\/spark.apache.org\/docs\/latest\/programming-guide.html<\/a><\/p>\n<h3>Standalone Mode<\/h3>\n<p>There are two deploy modes for Spark Standalone. 
In client mode, the driver is launched in the same process as the client that submits the application. In cluster mode, however, the driver is launched on one of the Worker nodes.<\/p>\n<pre class=\"line-numbers\"><code class=\"language-bash\">$ .\/sbin\/start-master.sh -h localhost\n$ .\/sbin\/start-slave.sh spark:\/\/localhost:7077\n\n# Spark Web UI on the cluster manager\n$ open http:\/\/localhost:8080\/\n\n$ pyspark \\\n--driver-memory 4g \\\n--executor-memory 4g \\\n--master spark:\/\/localhost:7077\n\n$ spark-submit \\\n--master spark:\/\/localhost:7077 \\\nexamples\/src\/main\/python\/pi.py 10\n\n$ spark-submit \\\n--driver-memory 2g \\\n--driver-java-options \"-XX:ThreadStackSize=81920\" \\\n--total-executor-cores 3 \\\n--executor-cores 3 \\\n--executor-memory 12g \\\n--conf \"spark.executor.extraJavaOptions=-XX:ThreadStackSize=81920\" \\\n--master spark:\/\/localhost:7077 \\\n--packages \"mysql:mysql-connector-java:5.1.41,com.hankcs:hanlp:portable-1.3.4,edu.stanford.nlp:stanford-corenlp:3.7.0\" \\\n--jars \"\/Users\/vinta\/Projects\/albedo\/spark-data\/stanford-corenlp-3.8.0-models.jar\" \\\n--class ws.vinta.albedo.LogisticRegressionRanker \\\ntarget\/albedo-1.0.0-SNAPSHOT.jar\n\n# Spark Application UI on the driver\n$ open http:\/\/localhost:4040\/<\/code><\/pre>\n<p>ref:<br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/spark-standalone.html\">https:\/\/spark.apache.org\/docs\/latest\/spark-standalone.html<\/a><br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/submitting-applications.html\">https:\/\/spark.apache.org\/docs\/latest\/submitting-applications.html<\/a><br \/>\n<a href=\"https:\/\/spark.apache.org\/docs\/latest\/configuration.html\">https:\/\/spark.apache.org\/docs\/latest\/configuration.html<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>First, you need the Java 8 
JDK.<\/p>\n","protected":false},"author":1,"featured_media":375,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[97,112,4],"tags":[108,2,109],"class_list":["post-374","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-about-ai","category-about-big-data","category-about-python","tag-apache-spark","tag-python","tag-scala"],"_links":{"self":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/374","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/comments?post=374"}],"version-history":[{"count":0,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/posts\/374\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media\/375"}],"wp:attachment":[{"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/media?parent=374"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/categories?post=374"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/vinta.ws\/code\/wp-json\/wp\/v2\/tags?post=374"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}