  • Scala IDE & Spark Integration
    Big Data 2019. 8. 29. 01:04

    1. Installing Scala IDE

         1 ) Scala IDE

              URL : http://scala-ide.org/download/sdk.html

         2 ) Download IDE

    2. Running Scala and Spark with Maven

         1 ) File - New - Scala Project

         2 ) Project name : sparkSample - Next >

         3 ) Libraries tab - Scala Library container - Edit...

         4 ) Fixed Scala Library container : 2.11.8 - Finish

         5 ) Confirm that the Scala Library container now shows 2.11.8, then click Finish

         6 ) Right-click the sparkSample project, then Configure - Convert to Maven Project

         7 ) Create new POM -

              Group id : sparkSample

              Artifact id : sparkSample

         8 ) pom.xml

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

      <modelVersion>4.0.0</modelVersion>

      <groupId>sparkSample</groupId>

      <artifactId>sparkSample</artifactId>

      <version>0.0.1-SNAPSHOT</version>

      <build>

        <sourceDirectory>src</sourceDirectory>

        <plugins>

          <plugin>

            <artifactId>maven-compiler-plugin</artifactId>

            <version>3.5.1</version>

            <configuration>

              <source>1.8</source>

              <target>1.8</target>

            </configuration>

          </plugin>

        </plugins>

      </build>

      <dependencies>
        <dependency>
          <groupId>org.apache.spark</groupId>
          <artifactId>spark-core_2.11</artifactId>
          <version>1.3.0</version>
        </dependency>
      </dependencies>

    </project>

     

     

     

         9 ) src - New - Scala Object

         10 ) Create New File - Name : SparkSample01

         11 ) SparkSample01.scala

    import org.apache.spark._

    object SparkSample01 {

      def main(args: Array[String]): Unit = {

        // Run Spark locally with a single thread
        val conf = new SparkConf().setMaster("local").setAppName("My App")
        val sc = new SparkContext(conf)

        // Load the README as an RDD of lines
        val lines = sc.textFile("/Users/xiilab827a/dev/spark-2.1.1-bin-hadoop2.7/README.md")
        println(lines.count())   // number of lines in the file
        println(lines.first())   // first line of the file

        sc.stop()
      }
    }
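    To see what the two println calls compute, the same operations can be sketched on a plain Scala collection, with `size` and `head` standing in for the RDD's `count()` and `first()`. The sample lines below are invented for illustration, not the actual README contents, and Spark itself is not needed to run this sketch:

    ```scala
    // Plain-Scala sketch of the two RDD operations used above.
    // The sample lines are made up; no Spark dependency is required here.
    object SparkSample01Sketch {
      def main(args: Array[String]): Unit = {
        val lines = Seq("# Apache Spark", "", "Spark is a unified analytics engine")

        println(lines.size)  // analogous to lines.count()
        println(lines.head)  // analogous to lines.first()
      }
    }
    ```

    The Spark version behaves the same way, except that `textFile` distributes the lines across partitions and `count()`/`first()` trigger the actual computation.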

     

     

    Run As  > 1 Scala Application 

     

    Result :

     

    * If a compile error occurs, see the link below.

    http://blog.naver.com/PostView.nhn?blogId=ljpark6&logNo=220895741316&parentCategoryNo=&categoryNo=55&viewDate=&isShowPopularPosts=true&from=search

     

     

    Source : http://learningapachespark.blogspot.kr/2015/03/12-how-to-run-spark-with-eclipse-and.html
