Reposted  macOS Catalina: VMware black screen, VM cannot be operated
Fixes the problem of VMware being impossible to add under Accessibility permissions: turn the Rootless (SIP) mechanism off, grant the permission, then turn Rootless back on, which resolves the VMware black-screen issue.
2023-07-29 10:02:25 257
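The Rootless toggle the entry above describes is Apple's System Integrity Protection (SIP); a minimal sketch, assuming the stock csrutil tool, which must be run from macOS Recovery (reboot holding Cmd-R), not from a normal session:

```shell
# In Terminal inside macOS Recovery:
csrutil disable   # turn Rootless/SIP off so the VMware entry can be added under Accessibility
# ...reboot normally, add VMware under Security & Privacy > Accessibility, then back in Recovery:
csrutil enable    # re-enable Rootless/SIP afterwards
```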
Original  Dinky: problem roundup
1. Specify the Flink version at startup, since Dinky itself bundles parts of Flink; don't take the official site at face value, it only gives a bare example (screenshot from the official site below). 2. The URL to use when adding a MySQL data source under data-source management.
2023-06-30 10:39:35 365
Original  Kafka: spark.rdd.MapPartitionsRDD cannot be cast to streaming.kafka010.HasOffsetRanges
① When Kafka messages are received via KafkaUtils.createDirectStream, the return type is JavaInputDStream; data coming out of any subsequent transformation operator is JavaDStream. ② If several later operators need the JavaInputDStream-typed data, split the code into separate chains, otherwise this cast error is thrown.
2023-06-20 10:43:53 101
Original  Spark: failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.
node03: failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://node01:7077
node03: full log in /opt/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-node03.out
2022-09-03 16:28:03 2413
Reposted  Fix for "xxx is not in the sudoers file. This incident will be reported."
Letting a non-root (sub-)user run commands with sudo.
2022-07-26 10:48:31 543
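A minimal sketch of the usual remedy for the error above, run as root; the group name depends on the distribution (wheel on RHEL/CentOS, sudo on Debian/Ubuntu), and alice is a hypothetical user name:

```shell
# Option 1: add the user to the sudo-enabled group
usermod -aG wheel alice
# Option 2: add an explicit rule through visudo (never edit /etc/sudoers by hand)
echo 'alice ALL=(ALL) ALL' | EDITOR='tee -a' visudo
```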
Original  Spark: installing Spark 2.4.0
Software: Index of /dist/spark, pick the build matching your Hadoop version. 1. Extract: tar -zxvf spark-2.4.0-bin-hadoop2.6.tgz; mv spark-2.4.0-bin-hadoop2.6 spark; vim /etc/profile.d/bigdata-etc.sh; export SPARK_HOME=/opt/spark; export PATH=$PATH:$SPARK_HOME/bin:$S......
2022-05-31 16:44:19 1158
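The steps in the entry above can be sketched end to end; the teaser cuts off the last PATH entry, and $SPARK_HOME/sbin is my assumption for how it continues:

```shell
tar -zxvf spark-2.4.0-bin-hadoop2.6.tgz
mv spark-2.4.0-bin-hadoop2.6 /opt/spark
# append the environment variables (completed PATH entry is an assumption)
cat >> /etc/profile.d/bigdata-etc.sh <<'EOF'
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
EOF
source /etc/profile.d/bigdata-etc.sh
```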
Original  Hive: Task failed task_... Job failed as tasks failed. failedMaps:1 failedReduces:0
beeline errors out when inserting a large batch of data, while the hive CLI can insert. Running a query before the insert surfaces the real error: GC overhead limit exceeded, i.e. a plain JVM out-of-memory. Since the hive CLI works, hive-site.xml can stay untouched; just edit hive-env.sh: uncomment HADOOP_HEAPSIZE, raise it, and change the matching if branch as well: if [ "$SERVICE" = "cli" ]; then if [ -z "$DEBUG" ]; then export HADOOP_
2022-05-16 15:06:29 1410
Reposted  Linux: as root, create a new user and grant it ownership of a folder, while letting a regular user restrict file access to themselves
1. Create the user. (1) Switch to root for the privilege: peng@ubuntu:~$ sudo su (2) Add a new user (e.g. xyz): root@ubuntu:/home/peng# adduser xyz, then follow the prompts: enter the password, confirm it, press Enter through the remaining questions, and finally type y to finish creating user xyz. 2. Grant the user ownership of a folder: chown -R <user> <path>, e.g. sudo chown -R xyz /mnt/ssd1/yuyu (note: 1. the folder /mnt/
2022-05-12 18:17:24 7278
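A condensed sketch of the steps above, with xyz and /mnt/ssd1/yuyu taken from the excerpt; the chmod 700 line is my reading of the truncated "only the user can access" step:

```shell
sudo adduser xyz                      # create the user, answer the password prompts
sudo chown -R xyz /mnt/ssd1/yuyu      # hand the folder over to the new user
sudo chmod -R 700 /mnt/ssd1/yuyu     # owner-only access (assumption: this is the truncated step)
```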
Reposted  Flink: deploying Flink 1.12.0 on YARN
1. Download the package: Index of /dist/flink. 2. Upload flink-1.12.0-bin-scala_2.12.tgz to the target directory on node01. 3. Extract: tar -zxvf flink-1.12.0-bin-scala_2.12.tgz. 4. Rename: mv flink-1.12.0-bin-scala_2.12 flink. 5. Add the environment variables and source them: export FLINK_HOME=/opt/flink export PATH=$PATH:$......
2022-05-12 18:16:02 751
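Steps 3 to 5 above can be sketched as follows; the teaser truncates the PATH export, and $FLINK_HOME/bin is my assumption for the rest (the profile file name mirrors the Spark entry and is also an assumption):

```shell
tar -zxvf flink-1.12.0-bin-scala_2.12.tgz
mv flink-1.12.0-bin-scala_2.12 /opt/flink
# append the environment variables, then reload the profile
cat >> /etc/profile.d/bigdata-etc.sh <<'EOF'
export FLINK_HOME=/opt/flink
export PATH=$PATH:$FLINK_HOME/bin
EOF
source /etc/profile.d/bigdata-etc.sh
```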
Original  Kafka: The Cluster ID 6OgYsn2hRUm5KnEHt9XXoA doesn't match stored clusterId Some(mAzhR9xTQBeC7BNhiWVS
[2022-05-10 09:15:36,223] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-05-10 09:15:36,360] INFO Cluster ID = 6OgYsn2hRUm5KnEHt9XXoA (kafka.server.KafkaServer)
[2022-05-10 09:15:36,366] ERROR Fatal error during Kaf.
2022-05-10 09:29:05 346
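The mismatch above generally means the broker's stored cluster id (in meta.properties under the log directory) no longer matches what ZooKeeper reports. A hedged sketch of the common fix; the path is an assumption, check log.dirs in your server.properties:

```shell
# Either remove the stale metadata file so the broker re-registers on restart...
rm /tmp/kafka-logs/meta.properties    # /tmp/kafka-logs is a guess; use your log.dirs value
# ...or edit cluster.id inside that file to match the id ZooKeeper reports, then restart Kafka.
```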
Original  Hadoop: HA pitfalls: every NameNode is standby
Symptom: all NameNodes are standby even though ZooKeeper starts normally, i.e. the ZK-based failover is not taking effect and HA cannot switch. Attempt 1: manually force one NameNode to active. On a NameNode, run hdfs haadmin -transitionToActive --forcemanual nn1 (nn1 being one of your nameservice ids). Result: nn1 does become active, but after stop-dfs.sh and another start-dfs.sh, every NameNode is once again stan
2022-05-09 15:52:26 473
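The forced transition tried above bypasses the ZKFC, which is exactly why it does not survive a restart. The durable fix commonly cited for this symptom, offered here as an assumption since the teaser is cut off, is re-initializing the failover znode:

```shell
hdfs haadmin -getServiceState nn1                     # inspect the current state
hdfs haadmin -transitionToActive --forcemanual nn1    # one-off only, not HA-aware
# Durable fix (assumption): rebuild the ZKFC znode, then restart DFS and the ZKFC daemons
hdfs zkfc -formatZK
```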
Original  Flink: problem roundup
Problem 1: Caused by: org.apache.flink.configuration.IllegalConfigurationException: Sum of configured Framework Heap Memory (128.000mb (134217728 bytes)), Framework Off-Heap Memory (128.000mb (134217728 bytes)), Task Off-Heap Memory (0 bytes), Managed Memory (5
2022-04-27 09:20:31 2617
Reposted  Flink: problem and fix: ClassNotFoundException: org.apache.hadoop.security.UserGroupInformation
The Flink runtime is missing the relevant jars; add the following under lib/: aws-java-sdk-s3-1.11.1030.jar, flink-shaded-hadoop-2-uber-3.1.1.3.0.1.0-187-10.0.jar, hadoop-aws-3.1.0.jar (3.1.0 being your own Hadoop version)...
2022-04-26 10:44:04 2999
Original  Hive: Schema version 1.2.0 does not match metastore's schema version 2.1.0
Disable metastore schema verification in hive-site.xml:
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
</property>
2022-04-25 13:41:47 1874
Original  Hive: Schema initialization FAILED Metastore state would be inconsistent
[root@node3 hive-2.3.4]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hive-2.3.4/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found bindin
2022-04-02 14:00:18 3869 1
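The usual way out of the inconsistent-state error above is clearing the half-initialized metastore database and rerunning schematool; this is an assumption since the teaser is truncated, and metastore is a hypothetical database name, so check javax.jdo.option.ConnectionURL in your hive-site.xml first:

```shell
# Drop and recreate the (hypothetical) metastore DB, then re-run the schema init
mysql -uroot -p -e "DROP DATABASE IF EXISTS metastore; CREATE DATABASE metastore;"
schematool -dbType mysql -initSchema
```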
Reposted  Hive: beeline startup error "Found class jline.Terminal, but interface was expected..."
Found class jline.Terminal, but interface was expected...
2022-04-02 13:53:40 148
Reposted  Spark: spark on yarn job fails with java.nio.channels.ClosedChannelException
Cause: the node was allocated too little memory, so YARN killed the Spark application. Fix: configure yarn-site.xml:
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-che......
2022-04-02 13:47:42 315
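The snippet above is cut off mid-property; it presumably continues with the virtual-memory twin of the physical check. Completed here from the standard Hadoop property names as a hedged sketch; place it inside the <configuration> element of yarn-site.xml:

```xml
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```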
Reposted  CentOS 7: installing MySQL
wget \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-client-5.7.32-1.el7.x86_64.rpm \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-common-5.7.32-1.el7.x86_64.rpm \
https://cdn.mysql.com/archives/mysql-5.7/mysql-community-libs-5.7.32-
2022-03-30 11:03:00 184
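The truncated download above can be sketched to completion; the fourth package (mysql-community-server) does not appear in the teaser and is my assumption, as is the install order (common first, since the others depend on it):

```shell
wget \
  https://cdn.mysql.com/archives/mysql-5.7/mysql-community-client-5.7.32-1.el7.x86_64.rpm \
  https://cdn.mysql.com/archives/mysql-5.7/mysql-community-common-5.7.32-1.el7.x86_64.rpm \
  https://cdn.mysql.com/archives/mysql-5.7/mysql-community-libs-5.7.32-1.el7.x86_64.rpm \
  https://cdn.mysql.com/archives/mysql-5.7/mysql-community-server-5.7.32-1.el7.x86_64.rpm
# install in dependency order, then start the service
rpm -ivh mysql-community-common-5.7.32-1.el7.x86_64.rpm \
         mysql-community-libs-5.7.32-1.el7.x86_64.rpm \
         mysql-community-client-5.7.32-1.el7.x86_64.rpm \
         mysql-community-server-5.7.32-1.el7.x86_64.rpm
systemctl start mysqld
```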
Reposted  Python: appending a dict to a list inside a for loop leaves the last item repeated
The same json dict object is appended on every iteration, so all entries end up equal:
num = [i for i in range(5)]
data = ["a", "b", "c", "d", "e"]
json = {}
collect = []
for index, content in enumerate(data):
    json['name'] = data[index]
    collect.append(json)
print(collect)
Result: [{'name': 'e'}, {'name': 'e'}, {'name': 'e'}, {'name': 'e'}, {'name
2021-04-25 16:51:50 1849
Original  MySQL: a tour of functions
1. String concatenation: concat, concat_ws. concat joins its arguments in order; concat_ws (concat with separator) joins with a separator passed as the first argument.
select concat('10'); -- 10
select concat('11','22','33'); -- 112233
SELECT CONCAT(last_name,'_',first_name) 姓名 FROM employees;
select concat_ws(',','11','22','33'); --
2021-04-25 16:44:33 94
Reposted  Spark: "main" java.lang.IllegalArgumentException: Illegal pattern component: XXX
Spark 2.3.0: fixing Exception in thread "main" java.lang.IllegalArgumentException: Illegal pattern component: XXX. The error shows up when calling spark.read.json or csv, e.g. res.write.mode("append").json("c://out"). Cause: during a Maven upgrade the full dependency set was not pulled in automatically; the JSON API has special requirements around timeStampFormat
2021-04-16 15:46:42 493
Reposted  Hive: common built-in time functions
Function | Signature | Description
from_unixtime | from_unixtime(bigint unixtime[, string format]) | converts a Unix timestamp to a time string in the current time zone, default format "yyyy-MM-dd HH:mm:ss"
unix_timestamp | unix_timestamp() | returns the current Unix timestamp in seconds
unix_timestamp | unix_timestamp(string date) | converts a "yyyy-
2021-02-24 13:44:51 330