Samza: Writing to HDFS


The samza-hdfs module implements a Samza Producer that writes to HDFS. The current implementation includes a ready-to-use HdfsSystemProducer and three HdfsWriters: one writes messages of raw bytes to a SequenceFile with BytesWritable keys and values; another writes UTF-8 Strings to a SequenceFile with LongWritable keys and Text values; the last writes out Avro data files, including automatically reflected schemas for POJO objects.
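
To illustrate what an "automatically reflected schema" means in the Avro case, here is a minimal sketch (not part of samza-hdfs) that uses Avro's ReflectData to derive a schema from a plain Java object. The ReflectedSchemaDemo and ClickEvent classes and their fields are hypothetical examples.

import org.apache.avro.Schema;
import org.apache.avro.reflect.ReflectData;

public class ReflectedSchemaDemo {
    // A plain POJO of the kind a StreamTask might emit to the HDFS system.
    // The class name and fields are hypothetical.
    public static class ClickEvent {
        public String userId;
        public String url;
        public long timestampMs;
    }

    public static void main(String[] args) {
        // Avro derives a record schema from the POJO's fields via reflection;
        // the AvroDataFileHdfsWriter reflects schemas in a similar way, which is
        // why no explicit SerDe should be configured for it.
        Schema schema = ReflectData.get().getSchema(ClickEvent.class);
        System.out.println(schema.toString(true));
    }
}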

Configuring the HdfsSystemProducer

You can configure the HdfsSystemProducer like any other Samza system, using configuration keys and values set in the job.properties file. You might configure the system producer for use by your StreamTasks like this:

# set the SystemFactory implementation to instantiate HdfsSystemProducer aliased to 'hdfs-clickstream'
systems.hdfs-clickstream.samza.factory=org.apache.samza.system.hdfs.HdfsSystemFactory

# define a serializer/deserializer for the hdfs-clickstream system
# DO NOT define (i.e. comment out) a SerDe when using the AvroDataFileHdfsWriter so it can reflect the schema
systems.hdfs-clickstream.samza.msg.serde=some-serde-impl

# consumer configs not needed for HDFS system, reader is not implemented yet

# Assign a Metrics implementation via a label we defined earlier in the props file
systems.hdfs-clickstream.streams.metrics.samza.msg.serde=some-metrics-impl

# Assign the implementation class for this system's HdfsWriter
systems.hdfs-clickstream.producer.hdfs.writer.class=org.apache.samza.system.hdfs.writer.TextSequenceFileHdfsWriter
#systems.hdfs-clickstream.producer.hdfs.writer.class=org.apache.samza.system.hdfs.writer.AvroDataFileHdfsWriter

# Set compression type supported by chosen Writer. Only BLOCK compression is supported currently
# AvroDataFileHdfsWriter supports snappy, bzip2, deflate or none (null, anything other than the first three)
systems.hdfs-clickstream.producer.hdfs.compression.type=snappy

# The base dir for HDFS output. The default Bucketer for SequenceFile HdfsWriters
# is currently /BASE/JOB_NAME/DATE_PATH/FILES, where BASE is set below
systems.hdfs-clickstream.producer.hdfs.base.output.dir=/user/me/analytics/clickstream_data

# Assign the implementation class for the HdfsWriter's Bucketer
systems.hdfs-clickstream.producer.hdfs.bucketer.class=org.apache.samza.system.hdfs.writer.JobNameDateTimeBucketer

# Configure the DATE_PATH the Bucketer will set to bucket output files by day for this job run.
systems.hdfs-clickstream.producer.hdfs.bucketer.date.path.format=yyyy_MM_dd

# Optionally set the max output bytes (records for AvroDataFileHdfsWriter) per file.
# A new file will be cut and output continued on the next write call each time this many bytes
# (records for AvroDataFileHdfsWriter) are written.
systems.hdfs-clickstream.producer.hdfs.write.batch.size.bytes=134217728
#systems.hdfs-clickstream.producer.hdfs.write.batch.size.records=10000 

The above configuration assumes that Metrics and Serde implementations have been properly configured against the some-serde-impl and some-metrics-impl labels elsewhere in the same job.properties file. Each of these properties has a reasonable default, so you can leave out any that you do not need to customize for your job run.
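
Once the system is configured, a job's tasks write to HDFS simply by sending messages to that system. The following is a minimal sketch assuming the hdfs-clickstream system defined above; the ClickstreamToHdfsTask class and the clickstream-events stream name are hypothetical, and the configured HdfsWriter determines how each message is serialized on disk.

import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;

public class ClickstreamToHdfsTask implements StreamTask {
    // The system name must match the alias configured above ("hdfs-clickstream");
    // the stream name ("clickstream-events") is a hypothetical example.
    private static final SystemStream HDFS_OUTPUT =
        new SystemStream("hdfs-clickstream", "clickstream-events");

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
        // Forward each incoming message to the HDFS system; the HdfsSystemProducer
        // hands it to the configured HdfsWriter, which writes it out to HDFS.
        collector.send(new OutgoingMessageEnvelope(HDFS_OUTPUT, envelope.getMessage()));
    }
}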

