千山我独行_不需相送

Extracting rar files from the command line on a Mac

1. First install brew. Typing brew returns "zsh: command not found: brew", so go to the official site, https://brew.sh/index_zh-cn, and find the install command: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" 2. bre…

2019-02-13 15:46:11

Setting a static IP with NAT mode in VMware

Reference: https://www.cnblogs.com/jsonhc/p/7685393.html. To keep the same IP both at the office and at home, the VM is given a static IP through VMware's NAT mode. 1. Connect the VM through NAT mode. 2. Edit the VM's virtual machine settings. 3. Look up the gateway for that subnet; it is 192.168.44.2, so configure the static IP accordingly, then check the VMnet8 subnet on the Windows host: it is now on the same subnet, so…

2018-04-09 15:30:13

Setting up Tomcat in Eclipse to serve files

1. New -> Project -> Dynamic Web Project. 2. Window -> Show View -> Servers, then add a Tomcat server. 3. Put the files to be served under the WebContent directory, and make sure to refresh so they appear in Eclipse, otherwise the page returns a 404. This step is not needed if Tomcat is started on its own: …

2018-04-04 15:46:08

How to set up Jupyter Notebook on a Linux system

Reference: how to set up Jupyter Notebook on a Linux system, https://blog.csdn.net/langhailove_2008/article/details/79110949. Prepare the environment: download the Linux version of Anaconda3 from the official site, https://www.anaconda.com/download/#linux, and upload it to the server. 1. Run the installer script: sudo sh Anaconda3-5.1…

2018-04-04 14:14:52

English introduction

Project introduction: I do development work focused on big data, writing Hadoop and Spark code. Recently I have been working on data analytics, translating SAS logic into Spark RDD implementations, because some metrics cannot…

2018-02-06 15:42:20

Lambda expressions

Passing Functions to Spark. Spark's API relies heavily on passing functions in the driver program to run on the cluster. In Java, functions are represented by classes implementi…
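As a companion to the quoted passage, here is a minimal Scala sketch (not from the original post) of the two common ways of passing functions to Spark: an anonymous function and a method defined in a global singleton object. The local[2] master and the sample data are assumptions for illustration.

    import org.apache.spark.{SparkConf, SparkContext}

    object PassingFunctions {
      // A function defined in a singleton object can be serialized and shipped to executors.
      def addOne(x: Int): Int = x + 1

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("PassingFunctions").setMaster("local[2]"))
        val nums = sc.parallelize(1 to 5)
        val doubled = nums.map(x => x * 2)   // anonymous function (lambda)
        val bumped  = nums.map(addOne)       // named function passed by reference
        println(doubled.collect().mkString(","))
        println(bumped.collect().mkString(","))
        sc.stop()
      }
    }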

2016-09-10 16:12:19

JSONObject_v3

package json; import net.sf.json.JSONArray; import net.sf.json.JSONObject; import java.io.*; import java.util.ArrayList; /** Created by xz86173 on 2/5/2016. */ public class JSONObject_…
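The post's code is Java; purely as an illustration of the same net.sf.json API, a minimal Scala sketch (the field names and values are made up) might look like this:

    import net.sf.json.{JSONArray, JSONObject}

    object JsonLibSketch {
      def main(args: Array[String]): Unit = {
        val obj = new JSONObject()
        obj.element("name", "spark")      // element() adds a key/value pair
        obj.element("version", "1.5.2")

        val arr = new JSONArray()
        arr.add(obj)

        println(arr.toString())           // compact JSON text
      }
    }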

2016-02-24 11:52:39

Compiling the Spark source with sbt

[username@server1 spark-1.5.2]$ /data/2/functionId/tmp/compile/sbt/bin/sbt gen-idea Getting org.scala-sbt sbt 0.13.7 ... You probably access the destination server through a proxy server that is…

2016-01-04 16:16:35

IntelliJ IDEA usage notes

After installing IntelliJ, remember to install the Scala plugin (not covered here). 1. Create a Scala project. Right-click and choose Run; the first run takes a while because it has to compile, but the second run is much faster. Moving on to Spark development: package the program from the previous step, then run the build:
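To verify the project setup described above, a one-file Scala program (the object name is arbitrary) is enough before moving on to Spark development:

    // Run from IntelliJ (right-click -> Run) to confirm the Scala plugin and SDK are wired up.
    object HelloScala {
      def main(args: Array[String]): Unit = {
        println("Scala project builds and runs.")
      }
    }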

2015-12-06 09:12:47

SimpleGraphX PageRank shell

package week7 import org.apache.log4j.{Level, Logger} import org.apache.spark.{SparkContext, SparkConf} import org.apache.spark.graphx._ import org.apache.spark.rdd.RDD object SimpleGraphX { def m…
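The preview cuts off at the object definition; a minimal GraphX PageRank sketch along the same lines (the tiny three-vertex graph and the tolerance value are assumptions, not the post's data) could be:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.graphx._

    object SimpleGraphXSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SimpleGraphXSketch").setMaster("local[2]"))
        // A tiny directed cycle: 1 -> 2 -> 3 -> 1
        val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 1L, 1)))
        val graph = Graph.fromEdges(edges, defaultValue = 1)
        // Run PageRank until the ranks change by less than the tolerance
        val ranks = graph.pageRank(0.0001).vertices
        ranks.collect().foreach { case (id, rank) => println(s"vertex $id -> $rank") }
        sc.stop()
      }
    }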

2015-12-02 08:36:35

Tuning Hive parameters

1. Where to start: session-scoped settings. set mapred.job.priority; set mapred.job.priority=VERY_HIGH (the reduce phase only starts once roughly 90% of the map tasks have finished). Compress intermediate MR output: set hive.exec.compress.intermediate=true; enable compression in Hive: set mapred.compress.map.o…
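The post issues these settings in the Hive CLI; purely as an illustration, the same session-scoped parameters can also be set from Spark 1.x through a HiveContext (this assumes a Spark build with Hive support, which the original post does not use):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    object HiveSessionSettings {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("HiveSessionSettings").setMaster("local[2]"))
        val hive = new HiveContext(sc)
        // Session-scoped, just like `set ...` in the Hive CLI
        hive.sql("SET mapred.job.priority=VERY_HIGH")
        hive.sql("SET hive.exec.compress.intermediate=true")
        sc.stop()
      }
    }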

2015-12-02 07:39:26

SparkSqlForTest

package week4 /** Created by Administrator on 2015/3/31. */ import java.text.SimpleDateFormat import org.apache.spark.{SparkConf, SparkContext} import org.apache.spark.SparkConte…
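The preview stops at the imports; a self-contained Spark 1.x SQL example in the same spirit (the Person case class and the sample rows are invented for illustration) would be:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    // The case class must sit at the top level so Spark SQL can infer the schema by reflection
    case class Person(name: String, age: Int)

    object SparkSqlSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SparkSqlSketch").setMaster("local[2]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        val people = sc.parallelize(Seq(Person("alice", 30), Person("bob", 25))).toDF()
        people.registerTempTable("people")
        sqlContext.sql("SELECT name FROM people WHERE age > 26").show()
        sc.stop()
      }
    }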

2015-12-02 07:37:43

english

1. No animal experiments [ɪk'sperɪmənt]. When we are on the operating ['ɒpəreɪtɪŋ] table, humans become the object of the experiment. If we could choose between the little mouse and your family, how would you c…

2015-11-29 23:24:18

Deploy_Cluster_CDH

1. Remove the bundled Java: list it with rpm -qa | grep java, then remove each package found with rpm -e --nodeps <package name>. 2. Configure SSH: cd ~/.ssh/, ssh-keygen -t rsa, cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys…

2015-11-27 07:57:28

Submitting WordCount and Spark SQL demos with spark-shell/spark-submit on a Spark cluster

1. Environment configuration. My conf/spark-env.sh is: export SPARK_MASTER_IP=node1.cluster.local export SPARK_WORKER_CORES=20 export SPARK_WORKER_MEMORY=12g export SPARK_WORKER_DIR=/scratch/cperez/spark export…
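For reference, the classic WordCount that a post like this typically submits is sketched below; the input path is a placeholder, and inside spark-shell the sc variable already exists, so only the transformations are needed there:

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCount {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("WordCount"))
        // Replace with a real file on HDFS or the local filesystem
        val counts = sc.textFile("hdfs:///tmp/input.txt")
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)
        counts.take(10).foreach(println)
        sc.stop()
      }
    }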

2015-11-27 07:34:01

Installing and using Kafka

1. Installing Kafka. 1. Download on n5: wget http://apache.dataguru.cn/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz 2. vi /usr/lib/kafka/config/server.properties and set broker.id=5, log.dirs=/usr/lib/kafka/kafka-logs, log.flush.i…
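Since this is the old Kafka 0.8.1.1 release, a minimal Scala producer against its classic API would look roughly like the sketch below; the broker address n5:9092 and the topic name test are assumptions, not values from the post.

    import java.util.Properties
    import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

    object KafkaProducerSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("metadata.broker.list", "n5:9092")                    // broker installed above (assumed address)
        props.put("serializer.class", "kafka.serializer.StringEncoder") // send plain strings

        val producer = new Producer[String, String](new ProducerConfig(props))
        producer.send(new KeyedMessage[String, String]("test", "hello kafka"))
        producer.close()
      }
    }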

2015-11-26 23:04:51

Deploy_Cluster_Apache

1. Environment setup. Change the hostname: vi /etc/sysconfig/network. Map hostnames to IPs: vim /etc/hosts. Install Java, removing any previous version first. 1. Remove Java: list it with rpm -qa | grep java, then remove the packages found with rpm -e --nodeps. 1. cd /opt and tar -zxvf jdk-7u75-linux-x64.tar.gz…

2015-11-26 23:01:22

Maven installation notes

1. Online installation: m2e - http://q4e.googlecode.com/svn/trunk/updatesite-iam/ via Help -> Install New Software… 1. http://download.eclipse.org/technology/m2e/releases 2. http://www.fuin.org/p2-repository/…

2015-11-24 23:04:05

Spark single-machine mode

1. Install Scala: http://www.scala-lang.org/download/2.10.6.html, then tar -zxvf scala-2.10.5.tar. 2. Install Spark. 1. Download from http://spark.apache.org/downloads.html and choose the package pre-built for Hadoop 2.6 and later, spark-1.5.2-bi…
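After unpacking, a quick smoke test in bin/spark-shell (where sc is predefined) confirms the local install; the numbers here are arbitrary:

    // Paste into bin/spark-shell after starting it from the Spark directory
    val data = sc.parallelize(1 to 1000)
    println(data.filter(_ % 2 == 0).count())   // should print 500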

2015-11-24 14:47:56

Hadoop single-machine mode

1. Install Hadoop. 1. Binary: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.2/hadoop-2.6.2.tar.gz 2. Hadoop 2.6 needs JDK 1.7; the JDK bundled with the Mac is 1.6, see the previous post for how to upgrade. 3. tar -zxvf hadoop-2.6.2.tar.gz 4. cd had…

2015-11-24 00:09:17

