<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Zhou's Blog</title>
<link href="/atom.xml" rel="self"/>
<link href="http://chzhou.cc/"/>
<updated>2019-03-18T14:25:11.979Z</updated>
<id>http://chzhou.cc/</id>
<author>
<name>Zhou</name>
</author>
<generator uri="http://hexo.io/">Hexo</generator>
<entry>
<title>TVM_SGX</title>
<link href="http://chzhou.cc/2019/01/01/tvm_sgx_doc/"/>
<id>http://chzhou.cc/2019/01/01/tvm_sgx_doc/</id>
<published>2019-01-01T14:19:26.000Z</published>
<updated>2019-03-18T14:25:11.979Z</updated>
<content type="html"><![CDATA[<h1 id="TVM-SGX"><a href="#TVM-SGX" class="headerlink" title="TVM_SGX"></a>TVM_SGX</h1><blockquote><p>文档分为两部分,第一部分为TVM自身及SGX属性的编译,第二部分为SGX APP的编译</p></blockquote><h2 id="TVM-编译"><a href="#TVM-编译" class="headerlink" title="TVM 编译"></a>TVM 编译</h2><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br></pre></td><td class="code"><pre><span class="line">cd /mnt</span><br><span class="line">mkdir build && cd build</span><br><span class="line">cmake .. -DUSE_LLVM=ON -DUSE_SGX=/opt/sgxsdk -DRUST_SGX_SDK=/opt/rust-sgx-sdk</span><br><span class="line">make -j4</span><br></pre></td></tr></table></figure><p>根据<a href="https://github.com/dmlc/tvm/tree/master/apps/sgx" target="_blank" rel="noopener">文档</a>,在启动好Docker后,进行编译。这里的编译是先由<code>CMakeLists.txt</code>生成<code>Makefile</code>,再进行编译。</p><ul><li><p>CMakeLists.txt</p><p>在TVM主目录下(即/mnt)下,有总的CMakeLists.txt,其中关键语句为:</p><figure class="highlight cmake"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">tvm_option(USE_SGX <span class="string">"Build with SGX"</span> <span class="keyword">OFF</span>)</span><br></pre></td></tr></table></figure><p>在这里开启TVM编译时的SGX选项。</p></li><li><p>在 /mnt/cmake/modules/SGX.cmake里,对SGX的部分进行编译</p><p>其中关键语句是在这里,使用SGX SDK里面的sgx_edger8r对<code>tvm.edl</code>进行解析,生成<code>tvm_t.c/h</code>和<code>tvm_u.c/h</code> </p><figure class="highlight cmake"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">add_custom_command</span>(</span><br><span class="line"> OUTPUT <span class="variable">${_tvm_u_h}</span></span><br><span class="line"> <span class="keyword">COMMAND</span> <span class="variable">${USE_SGX}</span>/bin/x64/sgx_edger8r --untrusted</span><br><span class="line"> --untrusted --untrusted-dir <span class="variable">${_sgx_src}</span>/untrusted</span><br><span class="line"> --trusted --trusted-dir <span class="variable">${_sgx_src}</span>/trusted</span><br><span class="line"> --search-path <span class="variable">${USE_SGX}</span>/<span class="keyword">include</span> --search-path <span class="variable">${RUST_SGX_SDK}</span>/edl</span><br><span class="line"> <span class="variable">${_tvm_edl}</span></span><br><span class="line"> <span class="keyword">COMMAND</span> sed -i <span class="string">"4i '#include <tvm/runtime/c_runtime_api.h>'"</span> <span class="variable">${_tvm_u_h}</span></span><br><span class="line"> <span class="keyword">COMMAND</span> sed -i <span class="string">"4i '#include <tvm/runtime/c_runtime_api.h>'"</span> <span class="variable">${_tvm_t_h}</span></span><br><span class="line"> DEPENDS <span class="variable">${_tvm_edl}</span></span><br><span class="line"> )</span><br></pre></td></tr></table></figure></li><li><p>TVM本身带有的SGX的代码在 /mnt/src/runtime/sgx/下。在此不详述</p></li></ul><h2 id="SGX-APP编译"><a href="#SGX-APP编译" class="headerlink" title="SGX APP编译"></a>SGX APP编译</h2><p>总的来说,SGX APP的编译分为两部分,一部分为tvm model的编译,另一部分则是enclave的编译。其中该部分的目录在 /mnt/apps/sgx</p><ul><li><p>首先安装依赖库</p><figure 
class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">pip install -e python -e topi/python -e nnvm/python</span><br></pre></td></tr></table></figure></li><li><p>外部程序调用enclave的时候都是引用的<code>enclave.signed.so</code>,所以在看Makefile的时候主要盯着<code>enclave.signed.so</code>的产生流程。</p></li></ul><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br></pre></td><td class="code"><pre><span class="line">graph TB</span><br><span class="line"> subgraph TVM_model Part</span><br><span class="line"> J(build_model.py)-->I </span><br><span class="line"> I(model.bc)-->H </span><br><span class="line"> H(model.o)-->F</span><br><span class="line"> end</span><br><span class="line"> subgraph SGX Part</span><br><span class="line"> G(xargo build --target x86_64-unknown-linux-sgx)</span><br><span class="line"> end</span><br><span class="line"> G(xargo build --target x86_64-unknown-linux-sgx)-->|使用./src/lib.rs|E</span><br><span class="line"></span><br><span class="line"> F(libmodel.a)-->E </span><br><span class="line"> E(libmodel_enclave.a)-->|复制为|C </span><br><span class="line"> D(libtvm_t.a)-->B </span><br><span class="line"> C(libenclave.a)-->B </span><br><span class="line"> B(enclave.so)-->|signing|A[enclave.signed.so]</span><br></pre></td></tr></table></figure><ul><li><p><code>libtvm_t.a</code>哪里来的?</p><p>经过实验(注释掉以下代码则不会产生<code>libtvm_t.a</code>),是在 /mnt/cmake/modules/SGX.cmake 里产生的,代码为:</p><figure class="highlight cmake"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">#build trusted library</span></span><br><span class="line"><span class="keyword">set_source_files_properties</span>(<span class="variable">${_tvm_t_c}</span> PROPERTIES GENERATED <span class="keyword">TRUE</span>)</span><br><span class="line"><span class="keyword">add_library</span>(tvm_t STATIC <span class="variable">${_tvm_t_c}</span>)</span><br><span class="line"><span class="keyword">add_dependencies</span>(tvm_t sgx_edl)</span><br><span class="line"><span class="keyword">target_include_directories</span>(tvm_t PUBLIC <span class="variable">${USE_SGX}</span>/<span class="keyword">include</span> <span class="variable">${USE_SGX}</span>/<span class="keyword">include</span>/tlibc)</span><br></pre></td></tr></table></figure><p>在根目录下的CMakeLists.txt引用该库为:</p><figure class="highlight cmake"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span>(<span class="keyword">NOT</span> USE_SGX <span class="keyword">STREQUAL</span> <span class="string">"OFF"</span>)</span><br><span class="line"> <span class="keyword">add_dependencies</span>(tvm 
sgx_edl)</span><br><span class="line"> <span class="keyword">add_dependencies</span>(tvm_runtime sgx_edl tvm_t)</span><br><span class="line"> <span class="keyword">install</span>(TARGETS tvm_t ARCHIVE DESTINATION lib<span class="variable">${LIB_SUFFIX}</span>)</span><br><span class="line"><span class="keyword">endif</span>()</span><br></pre></td></tr></table></figure><p>同时,对<code>libtvm_t.a</code>使用<code>objdump</code>命令,输出为:</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="meta">$</span> objdump -f libtvm_t.a:</span><br><span class="line"></span><br><span class="line">In archive libtvm_t.a:</span><br><span class="line"></span><br><span class="line">tvm_t.c.o: file format elf64-x86-64</span><br><span class="line">architecture: i386:x86-64, flags 0x00000011:</span><br><span class="line">HAS_RELOC, HAS_SYMS</span><br><span class="line">start address 0x0000000000000000</span><br></pre></td></tr></table></figure><p>可从中具体得知archive的是tvm_t.c.o</p></li></ul>]]></content>
<summary type="html">
<h1 id="TVM-SGX"><a href="#TVM-SGX" class="headerlink" title="TVM_SGX"></a>TVM_SGX</h1><blockquote>
<p>文档分为两部分,第一部分为TVM自身及SGX属性的编译,第二部分为SGX
</summary>
<category term="SGX" scheme="http://chzhou.cc/tags/SGX/"/>
<category term="TVM" scheme="http://chzhou.cc/tags/TVM/"/>
</entry>
<entry>
<title>华录杯比赛文档</title>
<link href="http://chzhou.cc/2018/10/21/%E5%8D%8E%E5%BD%95%E6%9D%AF%E9%A1%B9%E7%9B%AE%E8%AF%B4%E6%98%8E/"/>
<id>http://chzhou.cc/2018/10/21/华录杯项目说明/</id>
<published>2018-10-21T08:20:33.000Z</published>
<updated>2019-03-18T15:27:00.553Z</updated>
<content type="html"><![CDATA[<h1 id="项目说明"><a href="#项目说明" class="headerlink" title="项目说明"></a>项目说明</h1><p>本次的赛题名称为“汉字档案手写识别大赛“,是“中国华录杯·开放数据创新应用大赛”复赛。最后我们队伍的成绩以编辑距离为评判准则,分数为0.18151,排名第二。</p><p>文档分为三个部分,分别为”配准“,”识别“,”匹配“三个内容。</p><h2 id="比赛任务"><a href="#比赛任务" class="headerlink" title="比赛任务"></a>比赛任务</h2><p>本次任务中,参赛队伍将获得某公司人力部门所提供的近1000份应聘人员登记表格扫描图片,其中包含应聘人员的性别、民族、生日和教育经历等基本信息(姓名联系方式亲属等个人身份敏感信息已进行严格脱敏处理),还包括应聘者的个人学术或生活中所获荣誉与工作技能。参赛者需要利用得到的近1000张扫描件进行模型构建,从每个pdf文件中监测到表格,并从表格中提取指定类别的内容,准确地识别更多的类似档案扫描文件。</p><p>本次比赛没有提供训练集,选手需自行寻找手写体数据,以完成模型的训练。测试集数据为脱敏后的《应聘登记表》,共有990张图片,是脱敏后的“应聘登记表”的扫描文件。每一份应聘登记表都包括应聘者性别、民族、生日、教育经历等基本信息,以及工作技能等求职相关信息。所有图片被分为2组,分别是为线上测试集和线下测试集。线上测试集共398张图片,可供参赛者下载,用于计算线上排名和调整模型;线下测试集,共592张(不提供下载),用于线下审核检查。</p><p>本次比赛的评分标准为编辑距离,同时考虑识别结果与正确结果之间的”增删改“。其中编辑距离的公式如下:</p><p>$$ Score = \frac{1}{M} \sum_{i=1}^{m} \sum_{j=1}^{n}d_{ij} $$</p><p>其中M为所有简历中待识别字段中的总汉字数量,m为简历的数量,n为简历中待识别的字段数量,d<sub>ij</sub>为第i份简历中第j个字段的编辑距离。</p><p>本次比赛的难点在于手写体的识别。由于汉字字符多,手写随意性大,相似和混淆汉字对多,另外公开提供的手写训练集也少,所以这是这次比赛的最大难点。</p><h2 id="配准"><a href="#配准" class="headerlink" title="配准"></a>配准</h2><h3 id="背景知识"><a href="#背景知识" class="headerlink" title="背景知识"></a>背景知识</h3><p>图像配准(registration)是指同一区域内以不同成像手段所获得的不同图像图形的坐标的匹配。包括几何纠正、投影变换与统一比例尺三方面的处理。图像配准在目标检测、模型重建、运动估计、特征匹配,肿瘤检测、病变定位、血管造影、地质勘探、航空侦察等领域都有广泛的应用。简单来说,就是将所有图片都转换成同一样子,包括图片中的重要元素位置也都处于相同的位置,这样的话方便后续的分析。</p><p>本次比赛提供的数据为扫描版的简历。扫描时由于对简历放置的不规范,导致扫描成像的简历图片中的简历信息区域有歪斜。有歪斜会对后续的识别过程有很大影响。比如在歪斜情况下,对于ROI区域的裁剪就会有很大概率将表格线裁入其中,影响识别结果。同时,各个图片歪斜的角度和范围也不一致,会对后续的各个过程带来不同程度的影响。所以将所有图片处理成同一版式,确保各个图片的重要元素都在相同的位置,对后续的识别过程有很好的帮助。这就是配准的目的。</p><p>根据待配准图像之间的关系,可以将图像配准分为多源图像配准、基于模板的配准、多角度图像配准、时间序列图像配准四大类。在本次比赛中,我们配准的类别属于基于模板的配准,它的方法特点是根据模板预先选定特征信息,根据这些信息再去配准待配准图像。常见的应用场景有模式识别,字符识别,标识确认,波形分析等。</p><p>图像配准的算法有很多,比如SIFT算法,SURF算法等。经过我们的实验,最后选取了SIFT算法作为我们的配准算法。</p><p>尺度不变特征转换(Scale-invariant feature transform或SIFT)是一种CV的算法,用来侦测与描述影像中的局部性特征,它在空间尺度中寻找极值点,并提取出其位置、尺度、旋转不变量。</p><p>SIFT算法的特点有:</p><ul><li>SIFT特征是图像的局部特征,其对旋转、尺度缩放、亮度变化保持不变性,对视角变化、仿射变换、噪声也保持一定程度的稳定性</li><li>独特性好,信息量丰富,适用于在海量特征数据库中进行快速、准确的匹配</li><li>多量性,即使少数的几个物体也可以产生大量的SIFT特征向量</li><li>高速性,经优化的SIFT匹配算法甚至可以达到实时的要求</li><li>可扩展性,可以很方便的与其他形式的特征向量进行联合</li></ul><p>SIFT算法分解分为四步:</p><ol><li>尺度空间极值检测:搜索所有尺度上的图像位置。通过高斯微分函数来识别潜在的对于尺度和旋转不变的兴趣点。</li><li>关键点定位:在每个候选的位置上,通过一个拟合精细的模型来确定位置和尺度。关键点的选择依据于它们的稳定程度。</li><li>方向确定:基于图像局部的梯度方向,分配给每个关键点位置一个或多个方向。所有后面的对图像数据的操作都相对于关键点的方向、尺度和位置进行变换,从而提供对于这些变换的不变性。</li><li>关键点描述:在每个关键点周围的邻域内,在选定的尺度上测量图像局部的梯度。这些梯度被变换成一种表示,这种表示允许比较大的局部形状的变形和光照变化。</li></ol><h3 id="做法"><a href="#做法" class="headerlink" title="做法"></a>做法</h3><ol><li><p>模板配准1:首先对官方提供的下载文档进行模板配准,以使得所有简历的相对位置保持一致。由于2011年的简历、2013年以后的简历样式存在差别,本组考虑用两套模板进行配准工作。第一套以简历左上半部分的个人基本信息作为模板分类的依据,简历被分为了两类,两类的区别对比如下图所示:</p><p><img src="/images/hualu/1540100760115.png" alt="1540100760115"></p><p><img src="/images/hualu/1540100772779.png" alt="1540100772779"></p></li><li><p>模板配准2:第二套以简历左下半部分的学历信息作为模板分类的依据,主要为了提取“是否毕业”字段,简历被分为了三类,三类的区别对比如下图所示:</p><p> <img src="/images/hualu/1540100801041.png" alt="1540100801041"> <img src="/images/hualu/1540100806144.png" alt="1540100806144"> <img src="/images/hualu/1540100812088.png" 
alt="1540100812088"></p><p>简历配准主要使用了SIFT算法。</p><p>在简历分类时,以第一套模板为例,首先选好两张样式不同且扫描规范的简历图片作为源图片(20110029.jpg和20130143.jpg),再将所有官方提供的简历图片(即目的图片)依次与源图片的匹配坐标点个数进行比较。目的图片和哪一张源图片的匹配坐标点个数最多,就被分类至该源图片所在的类中;</p><p>在简历配准时,依据SIFT算法配准的原理,先找到变换矩阵M,再将目的图片向源图片配准,使得匹配坐标点重合在一起,这样便完成了配准工作。</p></li><li><p>抠图:在两个模板的配准工作结束后,所有简历的相对坐标位置已经能保持相同,这样便可以直接通过简历图片的像素坐标值把性别、体重、血型等信息依次取出。</p></li><li><p>创建提交模板文件:为使得每次成绩提交有效,首先需要依据提交格式创建模板文件:首先‘登记表编号’字段通过遍历官方提供的简历图片的名称得到,其他诸如‘性别’、‘民族’等字段的信息直接赋值为‘无’。</p></li></ol><h2 id="识别"><a href="#识别" class="headerlink" title="识别"></a>识别</h2><p>识别过程由两个队友共同完成。整体思路是对每个字段单独开发识别方法,最后进行整合。</p><ol><li><p>对于是否毕业,体重,血型,本科起止时间等字段:</p><ul><li>添加是否毕业信息:待识别的字段中包含高中、大专、本科、研究生四个阶段的是否毕业选项,这几个字段的内容和其余字段不同,是以打钩的方式完成填写而非手写文字的形式。因此在识别这些字段时,首先通过步骤3抠图精确定位需要打钩的方框位置,再比较‘是’和‘否’两方框像素值之和的大小。由于打钩的方框中所含黑色像素较多,因此像素值之和会比空白方框更小,因此在设定合适阈值之后,‘是否毕业’四个字段的内容可以非常精确地识别出来。提交结果显示,仅识别这四个字段的分数即可达到0.287。</li><li>添加体重信息:在抠图拿到体重字段的图片后,首先对图片进行预处理,包括数字分割以及二值化和缩放操作,使得其大小与mnist数据集的输入保持一致,均为28x28像素。预处理达到的效果如下图所示:</li></ul><p> <img src="/images/hualu/1540100900936.png" alt="1540100900936"> <img src="/images/hualu/1540101166120.png" alt="1540101166120"></p><ul><li><p>添加血型信息:在抠图拿到体重字段的图片后,首先对图片进行预处理,包括数字分割以及二值化和缩放操作,使得其大小与mnist数据集的输入保持一致,均为28x28像素。预处理达到的效果如下图所示:</p><p> <img src="/images/hualu/1540102505656.png" alt="1540102505656"> <img src="/images/hualu/1540102511206.png" alt="1540102511206"> </p><p>然后搭建一个2层的卷积神经网络,卷积核大小为5x5,池化层大小为3x3,以mnist扩展数据集emnist-letter为基础,只保留A、B、O三个字母的训练图片,加上部分手工标注数据作为训练集对网络进行训练,再对预处理后的图片进行识别。</p></li><li><p>添加本科起止时间信息:由于起止时间在抠图拿到体重字段的图片后,首先对图片进行预处理,包括数字分割以及二值化和缩放操作,使得其大小与mnist数据集的输入保持一致,均为28x28像素。</p><p>然后搭建一个2层的卷积神经网络,卷积核大小为5x5,池化层大小为2x2,以原始mnist数据集加上部分手工标注数据作为训练集对网络进行训练,再对预处理后的图片进行识别。</p></li></ul></li><li><p>对于其他字段:</p><ul><li>神经网络结构为:数据输入-卷积-池化-卷积-池化-卷积-池化-卷积-卷积-池化-全连接。卷积核大小为3x3,步长为1。</li><li>训练数据集为中科院发布的手写汉字数据集,并根据需要进行数据预处理,主要包括:添加噪声,图片切割的方式。</li></ul></li></ol><h2 id="匹配"><a href="#匹配" class="headerlink" title="匹配"></a>匹配</h2><p>匹配功能就是在上一步神经网络识别字段完成后,由于识别出来的字段不一定正确,在识别结果的基础上与语料库进行比对,将错误识别的字段进行修正,从而获得正确的字段。</p><p>我们进行匹配的字段有民族,籍贯,高中学校、专业、学位,大专学校、专业、学位,本科学校、专业、学位,研究生学校、专业、学位等。其中籍贯语料库的获取从网上公开资料获得,大专,本科,研究生学校及专业从教育部官网中获得,并进行整理。</p><p>匹配的输入为神经网络对相应字段识别出的Top3的结果,程序对Top3结果进行组合,通过正则匹配等方法在语料库中寻找对应的正确字段,从而进行修正。</p><h2 id="运行环境"><a href="#运行环境" class="headerlink" title="运行环境"></a>运行环境</h2><p>由于组员的分工不同,所擅长的编程语言不同,本项目中的语言大部分为Python,一小部分为C++,所依赖的环境有OpenCV,TensorFlow等。</p><ul><li><p>系统环境为Ubuntu 16.04</p></li><li><p>Python的版本为3.5</p></li><li><p>OpenCV版本</p><ul><li><p>图像ROI裁剪的OpenCV版本是编译的opencv-2.4.13.6</p><p> Python所需版本为:</p></li><li><p><code>pip3 install opencv-contrib-python==3.3.0</code></p></li></ul></li><li><p>TensorFlow版本为1.4.0</p></li></ul><h2 id="代码结构"><a href="#代码结构" class="headerlink" title="代码结构"></a>代码结构</h2><ol><li><p>配准及识别部分字段文件夹</p><p><img src="/images/hualu/1540103618643.png" alt="1540103618643"></p><p>文件包含内容为:</p><ul><li>EMNIST_Dataset: 训练模型所用的手写数字数据集和手写字母数据集</li><li>Initial: 官方提供的简历样本集合</li><li>models_AB: 训练好的血型模型</li><li>models_kg: 训练好的体重模型</li><li>module_partition: 根据登记表编号分为398个文件夹,每个文件夹均包含29个待识别模块</li><li>prepare_image_AB: 将血型模块字母分开后的模型输入,每个输入的图片大小为28x28</li><li>prepare_image_kg: 将体重模块数字分开后的模型输入,每个输入的图片大小为28x28</li><li>prepare_image_time: 将本科起止时间中数字分开后的输入,每个输入的图片大小为28x28</li><li>template_registration: 模板配准后的结果,module为第一个模板,用于切分简历上半部分的待识别模块,如性别、体重、血型等;yes_no为第二个模板,用于切分简历下半部分的待识别模块,如学校、学位、是否毕业等</li><li>baseline.csv: 根据提交格式创建出的提交模板</li><li>add_true_false_5.csv: 在baseline基础上添加高中、大专、本科、研究生是否毕业4个字段</li><li>add_weight_6.csv:在<em>5基础上添加体重字段</em></li><li>_add_blood_7.csv: 
在_6基础上添加血型字段</li><li>add_time_8.csv:在_7基础上添加本科起止时间字段</li><li>AB_train.py: 血型模型的训练过程(无需手动运行,否则覆盖models_AB中的内容)</li><li>kg_train.py: 体重模型的训练过程(无需手动运行,否则覆盖models_kg中的内容)</li><li>main.py: 主函数,涵盖了本部分的所有运行代码,直接运行即可。</li></ul><p>打开main.py,其中主函数部分如图所示,每部分实现功能均在注释中标出:</p><p><img src="/images/hualu/1540103735382.png" alt="1540103735382"></p></li><li><p>其他长文本字段识别</p><p><img src="C:\Users\zhou\Desktop\批注.png" alt></p><p>文件包含内容为:</p><ul><li>checkpointxuexiaoexpand: 识别学校名称字段的模型</li><li>jiguancheckpoint: 识别籍贯的模型</li><li>minzucheckpoint: 识别民族的模型</li><li>xingbie: 识别性别的模型</li><li>zhuanyecheckpoint: 识别专业模型</li><li>roi: 存放处理后数据的文件,每份简历根据字段的位置提取相关区域进行单独识别,不同的字段对应不同的识别模型</li><li>crop_roi_all.cc:对配准后的图片进行ROI区域裁剪</li><li>fenlei.py: 用于简历配准后的分类</li><li>ocr.py: 为神经网络的训练以及调用接口</li><li>demosystop3.py: 产生结果程序。程序主要流程为: 读取roi文件中的图片根据字段类型,加载不同的模型进行识别,产生csv文件。</li></ul></li><li><p>识别结果匹配文件夹</p><p><img src="C:\Users\zhou\Desktop\批注222.png" alt></p><p>文件包含内容为:</p><ul><li>readcsv.py: 从上一步识别产生的csv文件中提取对应字段</li><li>re_等文件:对对应的字段进行匹配纠正,具体功能从文件名中可获得。其中 re_functions.py 定义了一些每个文件所需要的一些函数</li><li>writecsv.py:将识别纠正的结果覆盖到原来的csv中,产生最终版结果。</li><li>语料库文件:从公开资料获得到的各个语料信息。</li></ul></li></ol><h2 id="其他"><a href="#其他" class="headerlink" title="其他"></a>其他</h2><p>我们提交的文件中包含了所有文件。具体的执行流程可以参见batch.sh中的语句。也可以直接运行。</p><p>由于整个项目代码由各个队友按照任务分别完成,在整合到一起后未经过长时间的测试,所以在代码衔接部分有可能出现问题。此类状况出现时烦请联系我们,我们会及时进行反馈。</p>]]></content>
<summary type="html">
<h1 id="项目说明"><a href="#项目说明" class="headerlink" title="项目说明"></a>项目说明</h1><p>本次的赛题名称为“汉字档案手写识别大赛“,是“中国华录杯·开放数据创新应用大赛”复赛。最后我们队伍的成绩以编辑距离为评判准则
</summary>
<category term="hualu" scheme="http://chzhou.cc/tags/hualu/"/>
</entry>
<entry>
<title>Spark中SVM doc</title>
<link href="http://chzhou.cc/2018/10/12/Spark%E4%B8%ADSVM%20doc/"/>
<id>http://chzhou.cc/2018/10/12/Spark中SVM doc/</id>
<published>2018-10-12T13:44:22.000Z</published>
<updated>2019-03-18T14:24:05.828Z</updated>
<content type="html"><![CDATA[<h1 id="Spark中SVM分析"><a href="#Spark中SVM分析" class="headerlink" title="Spark中SVM分析"></a>Spark中SVM分析</h1><p>mllib中的svm只实现了线性二分类,没有非线性(核函数),也没有多分类和回归。</p><p>其中在初始化的时候,选取的是SGD(stochastic gradient descent)算法,在该算法运行的过程中,体现了spark的分布式运行。</p><h2 id="MlLib中SVM实现"><a href="#MlLib中SVM实现" class="headerlink" title="MlLib中SVM实现"></a>MlLib中SVM实现</h2><p>以下是svm的类的关系图(网上download的):</p><p><img src="https://upload-images.jianshu.io/upload_images/967544-bf2bb84db9564edf.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/620/format/webp" alt="SVM"></p><h3 id="一-程序入口"><a href="#一-程序入口" class="headerlink" title="一. 程序入口"></a>一. 程序入口</h3><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">SVMWithSGD</span> <span class="title">private</span> (<span class="params"></span></span></span><br><span class="line"><span class="class"><span class="params"> private var stepSize: <span class="type">Double</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> private var numIterations: <span class="type">Int</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> private var regParam: <span class="type">Double</span>,</span></span></span><br><span class="line"><span class="class"><span class="params"> private var miniBatchFraction: <span class="type">Double</span></span>)</span></span><br><span class="line"><span class="class"> <span class="keyword">extends</span> <span class="title">GeneralizedLinearAlgorithm</span>[<span class="type">SVMModel</span>] <span class="keyword">with</span> <span class="title">Serializable</span> </span>{</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 定义了损失函数和优化函数</span></span><br><span class="line"> <span class="keyword">private</span> <span class="keyword">val</span> gradient = <span class="keyword">new</span> <span class="type">HingeGradient</span>()</span><br><span class="line"> <span class="keyword">private</span> <span class="keyword">val</span> updater = <span class="keyword">new</span> <span class="type">SquaredL2Updater</span>()</span><br><span class="line"> <span class="meta">@Since</span>(<span class="string">"0.8.0"</span>)</span><br><span class="line"> <span class="comment">// new了一个梯度下降的类,命名为optimizer</span></span><br><span class="line"> <span class="keyword">override</span> <span class="keyword">val</span> optimizer = <span 
class="keyword">new</span> <span class="type">GradientDescent</span>(gradient, updater)</span><br><span class="line"> .setStepSize(stepSize)</span><br><span class="line"> .setNumIterations(numIterations)</span><br><span class="line"> .setRegParam(regParam)</span><br><span class="line"> .setMiniBatchFraction(miniBatchFraction)</span><br><span class="line"> <span class="keyword">override</span> <span class="keyword">protected</span> <span class="keyword">val</span> validators = <span class="type">List</span>(<span class="type">DataValidators</span>.binaryLabelValidator)</span><br><span class="line"></span><br><span class="line"> <span class="comment">/**</span></span><br><span class="line"><span class="comment"> * Construct a SVM object with default parameters: {stepSize: 1.0, numIterations: 100,</span></span><br><span class="line"><span class="comment"> * regParm: 0.01, miniBatchFraction: 1.0}.</span></span><br><span class="line"><span class="comment"> */</span></span><br><span class="line"> <span class="meta">@Since</span>(<span class="string">"0.8.0"</span>)</span><br><span class="line"> <span class="comment">// 默认参数</span></span><br><span class="line"> <span class="function"><span class="keyword">def</span> <span class="title">this</span></span>() = <span class="keyword">this</span>(<span class="number">1.0</span>, <span class="number">100</span>, <span class="number">0.01</span>, <span class="number">1.0</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">override</span> <span class="keyword">protected</span> <span class="function"><span class="keyword">def</span> <span class="title">createModel</span></span>(weights: <span class="type">Vector</span>, intercept: <span class="type">Double</span>) = {</span><br><span class="line"> <span class="keyword">new</span> <span class="type">SVMModel</span>(weights, intercept)</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p><code>SVMWithSGD</code>里面实现了SVM模型基本的一些元素,包括</p><ol><li>继承了<code>GeneralizedLinearAlgorithm</code></li><li>定义了损失函数<code>HingeGradient()</code>,命名为 gradient</li><li>定义了L2正则化<code>SquaredL2Updater()</code>,命名为updater</li><li>以上面两种作为参数,new一个<code>GradientDescent()</code>,其接收的参数有两个,一个是梯度计算的损失函数,一个是优化函数,最后命名为optimizer</li><li>其他就是定义一些默认参数</li></ol><p>之后在接下来的<code>train()</code>函数里,调用<code>run()</code>进行模型运算。这里的<code>run()</code>继承自<code>GeneralizedLinearAlgorithm</code>类。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">train</span></span>(</span><br><span class="line"> input: <span class="type">RDD</span>[<span class="type">LabeledPoint</span>],</span><br><span class="line"> numIterations: <span class="type">Int</span>,</span><br><span class="line"> stepSize: <span class="type">Double</span>,</span><br><span class="line"> regParam: <span class="type">Double</span>,</span><br><span class="line"> miniBatchFraction: <span class="type">Double</span>,</span><br><span class="line"> initialWeights: <span class="type">Vector</span>): <span 
class="type">SVMModel</span> = {</span><br><span class="line"> <span class="comment">// new了一个SVMWithSGD类,然后调用run()</span></span><br><span class="line"> <span class="keyword">new</span> <span class="type">SVMWithSGD</span>(stepSize, numIterations, regParam, miniBatchFraction)</span><br><span class="line"> .run(input, initialWeights)</span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="二-运行过程"><a href="#二-运行过程" class="headerlink" title="二. 运行过程"></a>二. 运行过程</h3><p><code>run()</code>函数在GeneralizedLinearAlgorithm.scala文件里。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">run</span></span>(input: <span class="type">RDD</span>[<span class="type">LabeledPoint</span>], initialWeights: <span class="type">Vector</span>): <span class="type">M</span> = {</span><br><span class="line"> <span class="comment">// 省去一些初始化和为了计算所进行的优化过程</span></span><br><span class="line"> <span class="comment">// 之前定义好的optimizer调用optimize()函数</span></span><br><span class="line"> <span class="keyword">val</span> weightsWithIntercept = optimizer.optimize(data, initialWeightsWithIntercept)</span><br><span class="line"> <span class="comment">// 其他的一些过程</span></span><br></pre></td></tr></table></figure><p>在<code>run()</code>函数里,最关键的计算过程是在上一句,即由<code>new GradientDescent(gradient, updater)</code>生成的optimizer调用其<code>optimize()</code>函数进行优化。gradient是<code>HingeGradient()</code>函数,updater是<code>SquaredL2Updater()</code>。</p><h3 id="三-优化过程"><a href="#三-优化过程" class="headerlink" title="三. 优化过程"></a>三. 
优化过程</h3><p>在GradientDescent.scala中,定义了<code>optimize()</code>函数:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">optimize</span></span>(data: <span class="type">RDD</span>[(<span class="type">Double</span>, <span class="type">Vector</span>)], initialWeights: <span class="type">Vector</span>): <span class="type">Vector</span> = {</span><br><span class="line"> <span class="comment">// 调用runMiniBatchSGD()函数</span></span><br><span class="line"> <span class="keyword">val</span> (weights, _) = <span class="type">GradientDescent</span>.runMiniBatchSGD(</span><br><span class="line"> data,</span><br><span class="line"> gradient,</span><br><span class="line"> updater,</span><br><span class="line"> stepSize,</span><br><span class="line"> numIterations,</span><br><span class="line"> regParam,</span><br><span class="line"> miniBatchFraction,</span><br><span class="line"> initialWeights,</span><br><span class="line"> convergenceTol)</span><br><span class="line"> weights</span><br><span class="line"> }</span><br></pre></td></tr></table></figure><p>可以看出运行的是<code>runMiniBatchSGD()</code>函数:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">runMiniBatchSGD</span></span>(</span><br><span class="line"> data: <span class="type">RDD</span>[(<span class="type">Double</span>, <span class="type">Vector</span>)],</span><br><span class="line"> gradient: <span class="type">Gradient</span>,</span><br><span class="line"> updater: <span class="type">Updater</span>,</span><br><span class="line"> stepSize: <span class="type">Double</span>,</span><br><span class="line"> numIterations: <span class="type">Int</span>,</span><br><span class="line"> 
regParam: <span class="type">Double</span>,</span><br><span class="line"> miniBatchFraction: <span class="type">Double</span>,</span><br><span class="line"> initialWeights: <span class="type">Vector</span>,</span><br><span class="line"> convergenceTol: <span class="type">Double</span>): (<span class="type">Vector</span>, <span class="type">Array</span>[<span class="type">Double</span>]) = {</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 省去计算的一些过程</span></span><br><span class="line"> </span><br><span class="line"> <span class="keyword">while</span> (!converged && i <= numIterations) {</span><br><span class="line"> <span class="comment">// 将weights广播出去</span></span><br><span class="line"> <span class="keyword">val</span> bcWeights = data.context.broadcast(weights)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 以下是源码中的注释</span></span><br><span class="line"> <span class="comment">// Sample a subset (fraction miniBatchFraction) of the total data</span></span><br><span class="line"> <span class="comment">// compute and sum up the subgradients on this subset (this is one map-reduce)</span></span><br><span class="line"> <span class="keyword">val</span> (gradientSum, lossSum, miniBatchSize) = data.sample(<span class="literal">false</span>, miniBatchFraction, <span class="number">42</span> + i).treeAggregate((<span class="type">BDV</span>.zeros[<span class="type">Double</span>](n), <span class="number">0.0</span>, <span class="number">0</span>L))(</span><br><span class="line"> seqOp = (c, v) => {</span><br><span class="line"> <span class="comment">// c: (grad, loss, count), v: (label, features)</span></span><br><span class="line"> <span class="keyword">val</span> l = gradient.compute(v._2, v._1, bcWeights.value, <span class="type">Vectors</span>.fromBreeze(c._1))</span><br><span class="line"> (c._1, c._2 + l, c._3 + <span class="number">1</span>)</span><br><span class="line"> },</span><br><span class="line"> combOp = (c1, c2) => {</span><br><span class="line"> <span class="comment">// c: (grad, loss, count)</span></span><br><span class="line"> (c1._1 += c2._1, c1._2 + c2._2, c1._3 + c2._3)</span><br><span class="line"> })</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 销毁广播变量weights</span></span><br><span class="line"> bcWeights.destroy(blocking = <span class="literal">false</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 下面是计算完成后进行整理和输出log的一些语句</span></span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>在这个while循环计算梯度的时候,体现出spark分布式计算。</p><ol><li>先是由data所在的sc进行广播,将weights以广播变量的形式存入各个机器的缓存中。</li><li>data为rdd格式,<ol><li>调用<code>sample()</code>从每个partition中抽一些sample出来,第一个参数为<code>false</code> 意思为不放回的抽出,此时的各个sample仍在各自的partition中</li><li>调用<code>treeAggregate</code>函数对每个partition中的数据进行运算,最后在driver端进行汇总。<code>treeAggregate</code>函数里面有<code>seqOp</code> 和<code>combOp</code> 两个函数,其中<code>seqOp</code>定义了在每个partition中元素的操作,<code>combOp</code>定义了各个partition中元素进行aggregate时的规则,最后在driver端进行汇总计算。</li></ol></li></ol><p>在这个里面的<code>treeAggregate</code>函数进行了类似于KMeans中<code>reduceByKey()</code>和<code>collectAsMap()</code>的两步操作。</p><h3 id="四-附上HingeGradient-和SquaredL2Updater-的源码"><a href="#四-附上HingeGradient-和SquaredL2Updater-的源码" class="headerlink" title="四. 附上HingeGradient()和SquaredL2Updater()的源码"></a>四. 
附上HingeGradient()和SquaredL2Updater()的源码</h3><p><code>HingeGradient()</code>的<code>compute()</code>函数:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">HingeGradient</span> <span class="keyword">extends</span> <span class="title">Gradient</span> </span>{</span><br><span class="line"><span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">compute</span></span>(</span><br><span class="line"> data: <span class="type">Vector</span>,</span><br><span class="line"> label: <span class="type">Double</span>,</span><br><span class="line"> weights: <span class="type">Vector</span>,</span><br><span class="line"> cumGradient: <span class="type">Vector</span>): <span class="type">Double</span> = {</span><br><span class="line"> <span class="keyword">val</span> dotProduct = dot(data, weights)</span><br><span class="line"> <span class="comment">// Our loss function with {0, 1} labels is max(0, 1 - (2y - 1) (f_w(x)))</span></span><br><span class="line"> <span class="comment">// Therefore the gradient is -(2y - 1)*x</span></span><br><span class="line"> <span class="keyword">val</span> labelScaled = <span class="number">2</span> * label - <span class="number">1.0</span></span><br><span class="line"> <span class="keyword">if</span> (<span class="number">1.0</span> > labelScaled * dotProduct) {</span><br><span class="line"> axpy(-labelScaled, data, cumGradient)</span><br><span class="line"> <span class="number">1.0</span> - labelScaled * dotProduct</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> <span class="number">0.0</span></span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p><code>SquaredL2Updater()</code>:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="class"><span class="keyword">class</span> <span class="title">SquaredL2Updater</span> <span class="keyword">extends</span> <span class="title">Updater</span> </span>{</span><br><span class="line"> <span class="keyword">override</span> <span 
class="function"><span class="keyword">def</span> <span class="title">compute</span></span>(</span><br><span class="line"> weightsOld: <span class="type">Vector</span>,</span><br><span class="line"> gradient: <span class="type">Vector</span>,</span><br><span class="line"> stepSize: <span class="type">Double</span>,</span><br><span class="line"> iter: <span class="type">Int</span>,</span><br><span class="line"> regParam: <span class="type">Double</span>): (<span class="type">Vector</span>, <span class="type">Double</span>) = {</span><br><span class="line"> <span class="comment">// add up both updates from the gradient of the loss (= step) as well as</span></span><br><span class="line"> <span class="comment">// the gradient of the regularizer (= regParam * weightsOld)</span></span><br><span class="line"> <span class="comment">// w' = w - thisIterStepSize * (gradient + regParam * w)</span></span><br><span class="line"> <span class="comment">// w' = (1 - thisIterStepSize * regParam) * w - thisIterStepSize * gradient</span></span><br><span class="line"> <span class="keyword">val</span> thisIterStepSize = stepSize / math.sqrt(iter)</span><br><span class="line"> <span class="keyword">val</span> brzWeights: <span class="type">BV</span>[<span class="type">Double</span>] = weightsOld.asBreeze.toDenseVector</span><br><span class="line"> brzWeights :*= (<span class="number">1.0</span> - thisIterStepSize * regParam)</span><br><span class="line"> brzAxpy(-thisIterStepSize, gradient.asBreeze, brzWeights)</span><br><span class="line"> <span class="keyword">val</span> norm = brzNorm(brzWeights, <span class="number">2.0</span>)</span><br><span class="line"></span><br><span class="line"> (<span class="type">Vectors</span>.fromBreeze(brzWeights), <span class="number">0.5</span> * regParam * norm * norm)</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure>]]></content>
<summary type="html">
<h1 id="Spark中SVM分析"><a href="#Spark中SVM分析" class="headerlink" title="Spark中SVM分析"></a>Spark中SVM分析</h1><p>mllib中的svm只实现了线性二分类,没有非线性(核函数),也没有
</summary>
<category term="Spark" scheme="http://chzhou.cc/tags/Spark/"/>
</entry>
<entry>
<title>Spark中KMeans doc</title>
<link href="http://chzhou.cc/2018/10/11/Spark%E4%B8%ADK-Means%20doc/"/>
<id>http://chzhou.cc/2018/10/11/Spark中K-Means doc/</id>
<published>2018-10-11T01:17:12.000Z</published>
<updated>2019-03-18T14:23:23.235Z</updated>
<content type="html"><![CDATA[<h1 id="KMeans"><a href="#KMeans" class="headerlink" title="KMeans"></a>KMeans</h1><h2 id="例子代码"><a href="#例子代码" class="headerlink" title="例子代码"></a>例子代码</h2><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="comment">//原文链接:http://dblab.xmu.edu.cn/blog/1454-2/</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">import</span> org.apache.spark.ml.clustering.{<span class="type">KMeans</span>,<span class="type">KMeansModel</span>}</span><br><span class="line"><span class="keyword">import</span> org.apache.spark.ml.linalg.<span class="type">Vector</span></span><br><span class="line"><span class="keyword">import</span> org.apache.spark.ml.linalg.<span class="type">Vectors</span> <span class="comment">// 原文是Vectors,但是出错,经过搜索发现是Vector :http://lxw1234.com/archives/2016/01/605.htm</span></span><br><span class="line"><span class="keyword">import</span> spark.implicits._ <span class="comment">//开启隐式转换</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">case</span> <span class="class"><span class="keyword">class</span> <span class="title">model_instance</span> (<span class="params">features: <span class="type">Vector</span></span>) <span class="title">//开启隐式转换和创建这个model_instance(好像)是调用</span> .<span class="title">toDF</span>(<span class="params"></span>) <span class="title">的必要条件</span></span></span><br><span class="line"><span class="class"> </span></span><br><span class="line"><span class="class"><span class="title">val</span> <span class="title">rawData</span> </span>= sc.textFile(<span class="string">"hdfs://lotus02:9000/user/chzhou/data.txt"</span>)</span><br><span class="line"></span><br><span class="line"><span class="keyword">val</span> df = rawData.map(line =></span><br><span class="line"> { model_instance( <span class="type">Vectors</span>.dense(line.split(<span class="string">","</span>).filter(p => p.matches(<span class="string">"\\d*(\\.?)\\d*"</span>)) <span class="comment">// '\\d'为匹配数字</span></span><br><span class="line"> .map(_.toDouble)) )}).toDF()</span><br><span class="line"></span><br><span class="line"><span class="keyword">val</span> kmeansmodel = <span class="keyword">new</span> <span class="type">KMeans</span>().</span><br><span class="line"> setK(<span class="number">3</span>).</span><br><span class="line"> setFeaturesCol(<span class="string">"features"</span>).</span><br><span class="line"> setPredictionCol(<span class="string">"prediction"</span>).</span><br><span class="line"> fit(df)</span><br></pre></td></tr></table></figure><p>数据采用的是<a href="http://dblab.xmu.edu.cn/blog/wp-content/uploads/2017/03/iris.txt" target="_blank" rel="noopener">iris</a>数据,有四个实数值的特征,分别代表花朵四个部位的尺寸,以及该样本对应鸢尾花的亚种类型(共有3种亚种类型)。</p><h2 id="过程分析"><a href="#过程分析" class="headerlink" 
title="过程分析"></a>过程分析</h2><p>对程序在spark-shell中按条输入,在最后一步 KMeans.fit(df) 的时候整个程序转换才开始进行。</p><p>整个程序总共有9个job,如下图所示:</p><p><img src="https://i.loli.net/2018/10/11/5bbea460c3933.png" alt="job.png"></p><p>总共分为13个stage,如下图所示:</p><p><img src="https://i.loli.net/2018/10/11/5bbea4852f2a3.png" alt="stage.png"></p><p>虽然有13个,但是总共可以分为4个阶段,分别对应kmeans实现的4个阶段</p><h2 id="spark中kmeans实现"><a href="#spark中kmeans实现" class="headerlink" title="spark中kmeans实现"></a>spark中kmeans实现</h2><p>本次导入的库为spark中ml包。实际上ml中的kmeans只是对mllib中kmeans的封装,mllib.KMeans的接口是基于RDD的,而ml.KMeans的接口是基于DataFrame的。所以在调用ml.KMeans的fit( )函数后,内部其实是对DataFrame进行转换,转换为RDD形式,再将数据作为输入对mllib.KMeans进行调用从而训练模型,训练完后再返回给ml.KMeans。</p><p>mllib.KMeans中,在选取初始点时,实际上默认的算法采用的是KMeans||算法(算法的lineage是:普通kmeans -> kmeans++ -> KMeans||)。其实最主要的不同之处在于选取初始质心的策略上。经典的Kmeans算法的缺点在于需要预先指定k值以及对初始选取的质心比较敏感。为了解决该问题提出了<a href="https://en.wikipedia.org/wiki/K-means++" target="_blank" rel="noopener">kmeans++算法</a>,对于质心的选择进行了改变,但是问题在于算法必须顺序执行,无法并行扩展。针对此问题又提出了KMeans||算法,<a href="http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf" target="_blank" rel="noopener">论文在这里</a>。</p><h3 id="ml-KMeans"><a href="#ml-KMeans" class="headerlink" title="ml.KMeans"></a>ml.KMeans</h3><p>在ml中的kmens中,先将mllib中的kmeans包进行引入,同时为了避免和ml中原有的kmeans类混淆,重新命名为MLlibKMeans,MLlibKMeansModel。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> org.apache.spark.mllib.clustering.{<span class="type">DistanceMeasure</span>, <span class="type">KMeans</span> => <span class="type">MLlibKMeans</span>, <span class="type">KMeansModel</span> => <span class="type">MLlibKMeansModel</span>}</span><br></pre></td></tr></table></figure><p>函数的训练入口是fit()函数,从这里开始,并且先将DataFrame转化为rdd形式</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">override</span> <span class="function"><span class="keyword">def</span> <span class="title">fit</span></span>(dataset: <span class="type">Dataset</span>[_]): <span class="type">KMeansModel</span> = instrumented { instr =></span><br><span class="line"> transformSchema(dataset.schema, logging = <span class="literal">true</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">val</span> handlePersistence = dataset.storageLevel == <span class="type">StorageLevel</span>.<span class="type">NONE</span></span><br><span class="line"> <span class="keyword">val</span> instances = <span class="type">DatasetUtils</span>.columnToOldVector(dataset, getFeaturesCol)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> (handlePersistence) {</span><br><span class="line"> instances.persist(<span class="type">StorageLevel</span>.<span class="type">MEMORY_AND_DISK</span>)</span><br><span class="line"> } <span class="comment">//将rdd的存储等级设置为StorageLevel.MEMORY_AND_DISK</span></span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 
不知道在干什么。。。是调用的ml库自己的instrument方法(没影响)</span></span><br><span class="line"> instr.logPipelineStage(<span class="keyword">this</span>)</span><br><span class="line"> instr.logDataset(dataset)</span><br><span class="line"> instr.logParams(<span class="keyword">this</span>, featuresCol, predictionCol, k, initMode, initSteps, distanceMeasure,maxIter, seed, tol)</span><br></pre></td></tr></table></figure><p>然后将ml中自己的kmeans模型参数送入mllib的模型中,命名为algo.</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> algo = <span class="keyword">new</span> <span class="type">MLlibKMeans</span>()</span><br><span class="line"> .setK($(k))</span><br><span class="line"> .setInitializationMode($(initMode))</span><br><span class="line"> .setInitializationSteps($(initSteps))</span><br><span class="line"> .setMaxIterations($(maxIter))</span><br><span class="line"> .setSeed($(seed))</span><br><span class="line"> .setEpsilon($(tol))</span><br><span class="line"> .setDistanceMeasure($(distanceMeasure))</span><br></pre></td></tr></table></figure><p>调用mllib中kmeans的run()函数,进行运算。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> parentModel = algo.run(instances, <span class="type">Option</span>(instr))</span><br></pre></td></tr></table></figure><p>此时进入mllib中的kmeans模型实现函数。</p><h3 id="mllib-KMeans"><a href="#mllib-KMeans" class="headerlink" title="mllib.KMeans"></a>mllib.KMeans</h3><p>在mllib.kmeans中,执行的大致顺序如下:</p><ol><li>将rdd中的point变为(point,norm)形式。其中point的存储形式是vector,norm是二范数,即point的向量模。有了模之后方便以后计算各个点之间的距离。</li><li>用initRandom或者initKMeansParallel方法进行对初始中心点的选择。其中默认的方式是initKmeansParallel方法,也就是KMeans||算法</li><li>在初始的中心点选择好后,进行对模型的收敛计算,直到达到允许的误差值内或者达到最大迭代计算次数。</li><li>返回模型</li></ol><h4 id="一-rdd中point转换"><a href="#一-rdd中point转换" class="headerlink" title="一. rdd中point转换"></a>一. 
rdd中point转换</h4><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">private</span>[spark] <span class="function"><span class="keyword">def</span> <span class="title">run</span></span>(</span><br><span class="line"> data: <span class="type">RDD</span>[<span class="type">Vector</span>],</span><br><span class="line"> instr: <span class="type">Option</span>[<span class="type">Instrumentation</span>]): <span class="type">KMeansModel</span> = {</span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> (data.getStorageLevel == <span class="type">StorageLevel</span>.<span class="type">NONE</span>) {</span><br><span class="line"> logWarning(<span class="string">"The input data is not directly cached, which may hurt performance if its"</span></span><br><span class="line"> + <span class="string">" parent RDDs are also uncached."</span>)</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 计算模并且缓存下来</span></span><br><span class="line"> <span class="keyword">val</span> norms = data.map(<span class="type">Vectors</span>.norm(_, <span class="number">2.0</span>))</span><br><span class="line"> norms.persist()</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 将模与原来的rdd中的点zip在一起</span></span><br><span class="line"> <span class="keyword">val</span> zippedData = data.zip(norms).map { <span class="keyword">case</span> (v, norm) =></span><br><span class="line"> <span class="keyword">new</span> <span class="type">VectorWithNorm</span>(v, norm)</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 调用runAlgorithm()进行计算</span></span><br><span class="line"> <span class="keyword">val</span> model = runAlgorithm(zippedData, instr)</span><br></pre></td></tr></table></figure><h4 id="二-初始化中心点"><a href="#二-初始化中心点" class="headerlink" title="二. 初始化中心点"></a>二. 
初始化中心点</h4><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">private</span> <span class="function"><span class="keyword">def</span> <span class="title">runAlgorithm</span></span>(</span><br><span class="line"> data: <span class="type">RDD</span>[<span class="type">VectorWithNorm</span>],</span><br><span class="line"> instr: <span class="type">Option</span>[<span class="type">Instrumentation</span>]): <span class="type">KMeansModel</span> = {</span><br><span class="line"></span><br><span class="line"> <span class="keyword">val</span> sc = data.sparkContext</span><br><span class="line"></span><br><span class="line"> <span class="keyword">val</span> initStartTime = <span class="type">System</span>.nanoTime()</span><br><span class="line"></span><br><span class="line"> <span class="keyword">val</span> distanceMeasureInstance = <span class="type">DistanceMeasure</span>.decodeFromString(<span class="keyword">this</span>.distanceMeasure)</span><br><span class="line"></span><br><span class="line"> <span class="comment">//在这里可以看出,除非指明初始化的方法为initRandom,否则默认为initKmeansParallel</span></span><br><span class="line"> <span class="keyword">val</span> centers = initialModel <span class="keyword">match</span> {</span><br><span class="line"> <span class="keyword">case</span> <span class="type">Some</span>(kMeansCenters) =></span><br><span class="line"> kMeansCenters.clusterCenters.map(<span class="keyword">new</span> <span class="type">VectorWithNorm</span>(_))</span><br><span class="line"> <span class="keyword">case</span> <span class="type">None</span> =></span><br><span class="line"> <span class="keyword">if</span> (initializationMode == <span class="type">KMeans</span>.<span class="type">RANDOM</span>) {</span><br><span class="line"> initRandom(data)</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> initKMeansParallel(data, distanceMeasureInstance)</span><br><span class="line"> }</span><br><span class="line"> }</span><br></pre></td></tr></table></figure><p>下面进入initKMeansParallel方法:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">private</span>[clustering] <span class="function"><span class="keyword">def</span> <span 
class="title">initKMeansParallel</span></span>(data: <span class="type">RDD</span>[<span class="type">VectorWithNorm</span>],</span><br><span class="line"> distanceMeasureInstance: <span class="type">DistanceMeasure</span>): <span class="type">Array</span>[<span class="type">VectorWithNorm</span>] = {</span><br><span class="line"> <span class="comment">// 初始化costs</span></span><br><span class="line"> <span class="keyword">var</span> costs = data.map(_ => <span class="type">Double</span>.<span class="type">PositiveInfinity</span>)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 在rdd中随机选一个点</span></span><br><span class="line"> <span class="keyword">val</span> seed = <span class="keyword">new</span> <span class="type">XORShiftRandom</span>(<span class="keyword">this</span>.seed).nextInt()</span><br><span class="line"> <span class="keyword">val</span> sample = data.takeSample(<span class="literal">false</span>, <span class="number">1</span>, seed)</span><br><span class="line"> </span><br><span class="line"> require(sample.nonEmpty, <span class="string">s"No samples available from <span class="subst">$data</span>"</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 将随机选的那一个点作为第一个中心点</span></span><br><span class="line"> <span class="keyword">val</span> centers = <span class="type">ArrayBuffer</span>[<span class="type">VectorWithNorm</span>]()</span><br><span class="line"> <span class="keyword">var</span> newCenters = <span class="type">Seq</span>(sample.head.toDense)</span><br><span class="line"> centers ++= newCenters</span><br></pre></td></tr></table></figure><p>takesample此时发生了rdd的计算,这时候的过程对应于stage 0 和 stage 1。</p><p>接下来通过多次循环计算,取得所有的初始化中心点。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> step = <span class="number">0</span></span><br><span class="line"></span><br><span class="line"><span class="comment">//用来存储每次产生的中心点,并且是broadcast类型</span></span><br><span class="line"><span class="keyword">val</span> bcNewCentersList = <span class="type">ArrayBuffer</span>[<span class="type">Broadcast</span>[_]]()</span><br><span class="line"><span class="keyword">while</span> (step < initializationSteps) {</span><br><span class="line"> <span class="comment">// 每次把上一次算出来的newCenters广播出去</span></span><br><span class="line"> <span class="keyword">val</span> bcNewCenters = 
data.context.broadcast(newCenters)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 把新算出来的点加到里面</span></span><br><span class="line"> bcNewCentersList += bcNewCenters</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 计算data里面点的cost值</span></span><br><span class="line"> <span class="keyword">val</span> preCosts = costs</span><br><span class="line"> costs = data.zip(preCosts).map { <span class="keyword">case</span> (point, cost) =></span><br><span class="line"> math.min(distanceMeasureInstance.pointCost(bcNewCenters.value, point), cost)</span><br><span class="line"> }.persist(<span class="type">StorageLevel</span>.<span class="type">MEMORY_AND_DISK</span>)</span><br><span class="line"> <span class="keyword">val</span> sumCosts = costs.sum()</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 将broadcast变量销毁</span></span><br><span class="line"> bcNewCenters.unpersist(blocking = <span class="literal">false</span>)</span><br><span class="line"> preCosts.unpersist(blocking = <span class="literal">false</span>)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 开始选点,每次循环根据距中心点的距离成比例地选取 2 * k 个点</span></span><br><span class="line"> <span class="keyword">val</span> chosen = data.zip(costs).mapPartitionsWithIndex { (index, pointCosts) => <span class="keyword">val</span> rand = <span class="keyword">new</span> <span class="type">XORShiftRandom</span>(seed ^ (step << <span class="number">16</span>) ^ index)</span><br><span class="line"> pointCosts.filter { <span class="keyword">case</span> (_, c) => rand.nextDouble() < <span class="number">2.0</span> * c * k / sumCosts }.map(_._1)</span><br><span class="line"> }.collect()</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 把新选择出来地点变为dense格式,命名为newCenters</span></span><br><span class="line"> newCenters = chosen.map(_.toDense)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 把新的center放入到centers里面</span></span><br><span class="line"> centers ++= newCenters</span><br><span class="line"> step += <span class="number">1</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>对循环得到的centers处理一下,先是转换为vector形式,去重,再转换为VectorWithNorm格式。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> distinctCenters = centers.map(_.vector).distinct.map(<span class="keyword">new</span> <span class="type">VectorWithNorm</span>(_))</span><br></pre></td></tr></table></figure><p>如果找出来的centers比k多,通过LocalKMeans筛检出k个中心点。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">if</span> (distinctCenters.size <= k) {</span><br><span class="line"> distinctCenters.toArray</span><br><span class="line"> } <span class="keyword">else</span> {</span><br><span class="line"> <span class="keyword">val</span> bcCenters = 
data.context.broadcast(distinctCenters)</span><br><span class="line"> <span class="keyword">val</span> countMap = data</span><br><span class="line"> .map(distanceMeasureInstance.findClosest(bcCenters.value, _)._1)</span><br><span class="line"> .countByValue()</span><br><span class="line"></span><br><span class="line"> bcCenters.destroy(blocking = <span class="literal">false</span>)</span><br><span class="line"></span><br><span class="line"> <span class="keyword">val</span> myWeights = distinctCenters.indices.map(countMap.getOrElse(_, <span class="number">0</span>L).toDouble).toArray</span><br><span class="line"> <span class="type">LocalKMeans</span>.kMeansPlusPlus(<span class="number">0</span>, distinctCenters.toArray, myWeights, k, <span class="number">30</span>)</span><br><span class="line"> }</span><br></pre></td></tr></table></figure><h4 id="三-对模型进行收敛计算"><a href="#三-对模型进行收敛计算" class="headerlink" title="三. 对模型进行收敛计算"></a>三. 对模型进行收敛计算</h4><p>初始化选点做完后,将中心点存入到centers中,进行收敛计算,找出最后收敛的点的集合。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br><span class="line">45</span><br><span class="line">46</span><br><span class="line">47</span><br><span class="line">48</span><br><span class="line">49</span><br><span class="line">50</span><br><span class="line">51</span><br><span class="line">52</span><br><span class="line">53</span><br><span class="line">54</span><br><span class="line">55</span><br><span class="line">56</span><br><span class="line">57</span><br><span class="line">58</span><br><span class="line">59</span><br><span class="line">60</span><br><span class="line">61</span><br><span class="line">62</span><br><span class="line">63</span><br><span class="line">64</span><br><span class="line">65</span><br><span class="line">66</span><br><span class="line">67</span><br><span class="line">68</span><br><span class="line">69</span><br><span class="line">70</span><br><span class="line">71</span><br><span class="line">72</span><br><span class="line">73</span><br><span class="line">74</span><br><span class="line">75</span><br><span class="line">76</span><br><span class="line">77</span><br></pre></td><td 
class="code"><pre><span class="line"><span class="keyword">while</span> (iteration < maxIterations && !converged) {</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 用到了累加器,用来记录计算过程中整体的cost值。该变量只能通过关联操作进行“加”运算,并且在各个worker上进行同步</span></span><br><span class="line"> <span class="keyword">val</span> costAccum = sc.doubleAccumulator</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 将中心点的集合通过broadcast广播出去,在每个worker上都有一份该缓存,并且为只读</span></span><br><span class="line"> <span class="keyword">val</span> bcCenters = sc.broadcast(centers)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 对rdd中的各个partition做操作,分别找见各个partition中点的聚类中心</span></span><br><span class="line"> <span class="keyword">val</span> newCenters = data.mapPartitions { points =></span><br><span class="line"> <span class="comment">// 读取中心点的数值和相关维度</span></span><br><span class="line"> <span class="keyword">val</span> thisCenters = bcCenters.value</span><br><span class="line"> <span class="keyword">val</span> dims = thisCenters.head.vector.size</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 初始化数组,第一个sums数组用来存储各个中心点中的全部点的向量和</span></span><br><span class="line"> <span class="keyword">val</span> sums = <span class="type">Array</span>.fill(thisCenters.length)(<span class="type">Vectors</span>.zeros(dims))</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 第二个counts用来记录每个中心点的点簇的数量</span></span><br><span class="line"> <span class="keyword">val</span> counts = <span class="type">Array</span>.fill(thisCenters.length)(<span class="number">0</span>L)</span><br><span class="line"></span><br><span class="line"> <span class="comment">//对每个partition中的各个点做以下操作</span></span><br><span class="line"> points.foreach { point =></span><br><span class="line"> <span class="comment">// 各个点与每个中心点算距离,返回其中距离最小的。其中bestCenter是中心点在centers中的index,cost是两点之间的距离</span></span><br><span class="line"> <span class="keyword">val</span> (bestCenter, cost) = distanceMeasureInstance.findClosest(thisCenters, point)</span><br><span class="line"> </span><br><span class="line"> <span class="comment">// 在全局上将cost进行累加</span></span><br><span class="line"> costAccum.add(cost)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 把这个点与所属中心点的向量和存储到sums里面 </span></span><br><span class="line"> distanceMeasureInstance.updateClusterSum(point, sums(bestCenter))</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 该中心点下的点的个数加1</span></span><br><span class="line"> counts(bestCenter) += <span class="number">1</span></span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 在对每个点做完以上操作后,每个partition中的点对应的中心点及其cost也都计算出来了</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">// 对counts中大于0的进行筛选,返回index (等于0说明该中心点下没有对应的点,自然要删掉)</span></span><br><span class="line"> <span class="comment">// 返回的index形成了一个list,调用map语句对list中的每个index做一层包裹,形成 (index, (sum(index), counts(index)) 的形式</span></span><br><span class="line"> <span class="comment">// 因为mappartition要返回iterator类型,所以在后面加一个iterator</span></span><br><span class="line"> counts.indices.filter(counts(_) > <span class="number">0</span>).map(j => (j, (sums(j), counts(j)))).iterator</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 下面的reduceByKey对同一个index(也就是同一个中心点)中的数据进行聚合 
(因为数据分散在各个worker上)</span></span><br><span class="line"> }.reduceByKey { <span class="keyword">case</span> ((sum1, count1), (sum2, count2)) =></span><br><span class="line"> <span class="comment">// 对于相同的index,其中的值做以下操作</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">// 将sum2累加到sum1中</span></span><br><span class="line"> axpy(<span class="number">1.0</span>, sum2, sum1)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 将count2累加到count1上</span></span><br><span class="line"> (sum1, count1 + count2)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// collectAsMap()将所有聚合后的数据送入到driver端,让driver进行下一步操作</span></span><br><span class="line"> <span class="comment">// mapValues只对数据的value字段进行map操作,从(sum, count)信息中重新计算中心点 (数据是k-v,形式为(index, (sum(index), counts(index)))</span></span><br><span class="line"> }.collectAsMap().mapValues { <span class="keyword">case</span> (sum, count) =></span><br><span class="line"> distanceMeasureInstance.centroid(sum, count)</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"></span><br><span class="line"> <span class="comment">// 以上做完后就把新的中心点存入到了newCenters中</span></span><br><span class="line"></span><br><span class="line"> <span class="comment">// 销毁掉之前centers这个广播变量</span></span><br><span class="line"> bcCenters.destroy(blocking = <span class="literal">false</span>)</span><br><span class="line"></span><br><span class="line"> <span class="comment">// 重新进行计算,看看有没有收敛。要是没有收敛了就继续算</span></span><br><span class="line"> converged = <span class="literal">true</span></span><br><span class="line"> newCenters.foreach { <span class="keyword">case</span> (j, newCenter) =></span><br><span class="line"> <span class="keyword">if</span> (converged &&</span><br><span class="line"> !distanceMeasureInstance.isCenterConverged(centers(j), newCenter, epsilon)) {</span><br><span class="line"> converged = <span class="literal">false</span></span><br><span class="line"> }</span><br><span class="line"> centers(j) = newCenter</span><br><span class="line"> }</span><br><span class="line"></span><br><span class="line"> cost = costAccum.value</span><br><span class="line"> iteration += <span class="number">1</span></span><br><span class="line"> }</span><br></pre></td></tr></table></figure><p>剩下的代码就是输出一些log信息,最后返回kmeans模型。</p><h2 id="总结"><a href="#总结" class="headerlink" title="总结"></a>总结</h2><ol><li><p>分布式的计算体现在哪里?</p><p>对模型进行收敛计算中,体现分布式的地方有两点:</p><ul><li><p>循环开始时:</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">val</span> costAccum = sc.doubleAccumulator</span><br><span class="line"><span class="keyword">val</span> bcCenters = sc.broadcast(centers)</span><br></pre></td></tr></table></figure><p>第一个costAccum是spark的累加器的使用,它在各个worker中间进行同步。它的值仅能够通过“加”改变,所以常常被用来计数,并且只有driver能够读取它的值。在kmeans中用来记录全局的cost值。</p><p>第二个bcCenters是广播变量。driver将中心点的集合通过broadcast广播出去,于是bcCenters在每个worker上都有一份缓存,并且为只读变量。</p></li><li><p>对中心点进行reduceByKey操作后,调用collectAsMap。这个collectAsMap的文档解释是:”Return the key-value pairs in this RDD to the <strong>master</strong> as a 
Map”。也就是说reduceByKey在reducer端做完后,将数据通过调用collectAsMap送入到driver端中,让driver进行接下来的运算。</p></li></ul></li><li><p>在进行reduceByKey操作时,reducer端有几个?和什么有关系?</p><p>大概说一下我的理解:</p><ol><li><p>在mapreduce里,reducer的number是很重要的(显式指定?)。</p></li><li><p>而在spark中,在reducebykey时会发生shuffle,此时比较重要的是看子阶段rdd的partition个数(因为意味着数据会分在几个partition里面)。如果这个partition个数在reducebykey函数里面没有指定,则取决于partitioner中的partition个数。默认的实现是直接取spark.default.parallelism这个配置项的值作为分区数的,如果没有配置,则以RDD(即map的最后一个RDD)的分区数为准。</p><p>所以在reducebykey的时候,是没有reducer端的,而是在各个partition端作sort,数据分散在该例子中的两个partition中。在此之后通过collectAsMap()将数据汇集在driver端,由driver进行之后的操作。</p></li></ol></li></ol>]]></content>
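<p>To make the distributed step in part three concrete, the following is a minimal, self-contained Scala sketch of the same mapPartitions + reduceByKey + collectAsMap pattern, written with plain Array[Double] vectors instead of the MLlib types. It assumes an existing SparkContext sc, an RDD named data holding Array[Double] points, and the current centers as Array[Array[Double]]; it illustrates the pattern only and is not the MLlib source. Note that data reaches the driver only at collectAsMap, and by then it has already been reduced to at most k (sum, count) pairs.</p>
<figure class="highlight scala"><table><tr><td class="code"><pre>
// Squared Euclidean distance between two points.
def sqDist(a: Array[Double], b: Array[Double]): Double =
  a.zip(b).map { case (x, y) => (x - y) * (x - y) }.sum

val bcCenters = sc.broadcast(centers)   // read-only copy cached on every worker
val costAccum = sc.doubleAccumulator    // add-only global cost, readable on the driver

val newCenters = data.mapPartitions { points =>
  val cs = bcCenters.value
  val sums   = Array.fill(cs.length)(Array.fill(cs.head.length)(0.0))
  val counts = Array.fill(cs.length)(0L)
  points.foreach { p =>
    val best = cs.indices.minBy(i => sqDist(p, cs(i)))      // closest center for this point
    costAccum.add(sqDist(p, cs(best)))
    sums(best) = sums(best).zip(p).map(t => t._1 + t._2)    // partial vector sum per center
    counts(best) += 1L
  }
  counts.indices.filter(counts(_) > 0).map(j => (j, (sums(j), counts(j)))).iterator
}.reduceByKey { case ((s1, c1), (s2, c2)) =>                // merge partials across workers
  (s1.zip(s2).map(t => t._1 + t._2), c1 + c2)
}.collectAsMap().mapValues { case (s, c) => s.map(_ / c) }  // new centroids, computed on the driver
</pre></td></tr></table></figure>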
<summary type="html">
<h1 id="KMeans"><a href="#KMeans" class="headerlink" title="KMeans"></a>KMeans</h1><h2 id="例子代码"><a href="#例子代码" class="headerlink" title="例
</summary>
<category term="Spark" scheme="http://chzhou.cc/tags/Spark/"/>
</entry>
<entry>
<title>SGX Batcher's sort</title>
<link href="http://chzhou.cc/2018/08/31/SGX%20Batcher%E2%80%98s%20sort/"/>
<id>http://chzhou.cc/2018/08/31/SGX Batcher‘s sort/</id>
<published>2018-08-31T05:30:46.000Z</published>
<updated>2019-03-18T14:20:17.330Z</updated>
<content type="html"><![CDATA[<h1 id="SGX-enclave"><a href="#SGX-enclave" class="headerlink" title="SGX enclave"></a>SGX enclave</h1><h2 id="一-目标"><a href="#一-目标" class="headerlink" title="一. 目标"></a>一. 目标</h2><p>通过在SGX中实现 batcher‘s sort,进行 enclave runtime 测试</p><h2 id="二-难点"><a href="#二-难点" class="headerlink" title="二. 难点"></a>二. 难点</h2><ol><li><p>整个 enclave 的程序逻辑是什么?</p></li><li><p>enclave 如何与 不信任区(uRTS) 的data 做交互?</p></li><li><p>(疑问)</p><p>data进入enclave时应该为加密状态,再由enclave解密。data在 uRTS 中就应为加密状态,那么谁来给data加密?外部函数还是sgx?</p><ul><li><p>如果是外部函数,因为其在不信任区,有风险</p></li><li><p>如果是SGX来加密,那么是如何来操作的?</p></li></ul></li></ol><h2 id="三-流程"><a href="#三-流程" class="headerlink" title="三. 流程"></a>三. 流程</h2><ol><li>在SGX内部实现batcher’s sort 算法</li><li>准备数据,确定SGX如何读写数据</li><li>根据数据修改算法接口</li><li>进行测试</li></ol><h2 id="四-技术点"><a href="#四-技术点" class="headerlink" title="四. 技术点"></a>四. 技术点</h2><ol><li><p>batcher sort<br><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Batcher_Odd-Even_Mergesort_for_eight_inputs.svg/356px-Batcher_Odd-Even_Mergesort_for_eight_inputs.svg.png" alt="batcher.jpg"></p><p><img src="https://i.loli.net/2018/08/31/5b88cd1ddf834.jpg" alt="batcher.jpg"></p></li></ol><ul><li>Batcher排序网络是由一系列Batcher比较器(Batcher’s Comparator)组成的。Batcher比较器是指在两个输入端给定输入x,y,再在两个输出端输出最大值max{x,y}和最小值min{x,y}。</li><li>长度为2的倍数。</li><li>data-independent</li></ul><ol start="2"><li><p>SGX文件结构</p><ol><li><p>模块</p><ul><li><p>Untrusted Run-Time System (uRTS) – code that executes outside of the Intel SGX<br>enclave environment and performs functions such as:</p><ul><li><p>Loading and managing an enclave</p></li><li><p>Making calls to an enclave and receiving calls from within an enclave</p></li></ul></li><li><p>Trusted Run-Time System (tRTS) – code that executes within an Intel SGX enclave envir-<br> onment and performs functions such as:</p><ul><li><p>Receiving calls into the enclave and making calls outside of an enclave</p></li><li><p>Managing the enclave itself</p></li><li><p>Standard C/C++ libraries and run-time environment</p></li></ul></li><li>Edge Routines – functions that may run outside the enclave (untrusted edge routines) or inside the enclave (trusted edge routines) and serve to bind a call from the applic-ation with a function inside the enclave or a call from the enclave with a function in the application</li><li>3rd Party Libraries – for the purpose of this document, this is any library that has been tailored to work inside the Intel SGX enclave environment</li></ul></li><li><p>两个术语:</p><ul><li>ECall:“Enclave Call” a call made into an interface function within the enclave</li><li>OCall: “Out Call” a call made from within the enclave to the outside application</li></ul></li><li><p>实际文件结构</p><ul><li><p>./App </p><p>该文件夹存放应用程序中的<strong>不可信</strong>代码部分</p><ul><li>App.cpp文件:该文件是应用程序中的不可信部分代码,其中包括了创建Enclave及销毁Enclave的代码,也定义了一些相关的返回码供使用者查看Enclave程序的执行状态。其中的main函数是整个项目的入口函数。</li></ul></li><li><p>./Enclave</p><p>该文件夹存放应用程序中的可信代码部分和可信与不可信代码接口文件</p><ul><li>Enclave.config.xml文件:该文件是Enclave的配置文件,定义了Enclave中stack,heap等大小信息</li><li>Enclave.cpp文件:该文件是应用程序中的可信部分代码,包括了可信函数的实现</li><li>Enclave.edl文件:该文件是Enclave的接口定义文件,定义了不可信代码调用可信代码的ECALL函数接口和可信代码调用不可信代码的OECALL函数接口</li></ul></li></ul></li></ol></li></ol>]]></content>
<summary type="html">
<h1 id="SGX-enclave"><a href="#SGX-enclave" class="headerlink" title="SGX enclave"></a>SGX enclave</h1><h2 id="一-目标"><a href="#一-目标" class="
</summary>
<category term="SGX" scheme="http://chzhou.cc/tags/SGX/"/>
</entry>
<entry>
<title>Enclave文件结构</title>
<link href="http://chzhou.cc/2018/08/21/Enclave%E6%96%87%E4%BB%B6%E7%BB%93%E6%9E%84/"/>
<id>http://chzhou.cc/2018/08/21/Enclave文件结构/</id>
<published>2018-08-21T01:51:07.000Z</published>
<updated>2019-03-18T14:19:05.058Z</updated>
<content type="html"><![CDATA[<h1 id="Enclave文件结构"><a href="#Enclave文件结构" class="headerlink" title="Enclave文件结构"></a>Enclave文件结构</h1><p>大致总结一下Enclave的程序文件结构,参考的结构是Ubuntu 16.04 Desktop Intel SGX Linux 2.2 Release中的SampleCode文件。</p><p>这里以文件中SampleEnclave文件目录为例。</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br><span class="line">24</span><br><span class="line">25</span><br><span class="line">26</span><br><span class="line">27</span><br><span class="line">28</span><br><span class="line">29</span><br><span class="line">30</span><br><span class="line">31</span><br><span class="line">32</span><br><span class="line">33</span><br><span class="line">34</span><br><span class="line">35</span><br><span class="line">36</span><br><span class="line">37</span><br><span class="line">38</span><br><span class="line">39</span><br><span class="line">40</span><br><span class="line">41</span><br><span class="line">42</span><br><span class="line">43</span><br><span class="line">44</span><br></pre></td><td class="code"><pre><span class="line">.</span><br><span class="line">├── App</span><br><span class="line">│ ├── App.cpp</span><br><span class="line">│ ├── App.h</span><br><span class="line">│ ├── Edger8rSyntax</span><br><span class="line">│ │ ├── Arrays.cpp</span><br><span class="line">│ │ ├── Functions.cpp</span><br><span class="line">│ │ ├── Pointers.cpp</span><br><span class="line">│ │ └── Types.cpp</span><br><span class="line">│ └── TrustedLibrary</span><br><span class="line">│ ├── Libc.cpp</span><br><span class="line">│ ├── Libcxx.cpp</span><br><span class="line">│ └── Thread.cpp</span><br><span class="line">├── Enclave</span><br><span class="line">│ ├── config.01.xml</span><br><span class="line">│ ├── config.02.xml</span><br><span class="line">│ ├── config.03.xml</span><br><span class="line">│ ├── config.04.xml</span><br><span class="line">│ ├── Edger8rSyntax</span><br><span class="line">│ │ ├── Arrays.cpp</span><br><span class="line">│ │ ├── Arrays.edl</span><br><span class="line">│ │ ├── Functions.cpp</span><br><span class="line">│ │ ├── Functions.edl</span><br><span class="line">│ │ ├── Pointers.cpp</span><br><span class="line">│ │ ├── Pointers.edl</span><br><span class="line">│ │ ├── Types.cpp</span><br><span class="line">│ │ └── Types.edl</span><br><span class="line">│ ├── Enclave.config.xml</span><br><span class="line">│ ├── Enclave.cpp</span><br><span class="line">│ ├── Enclave.edl</span><br><span class="line">│ ├── Enclave.h</span><br><span class="line">│ ├── Enclave.lds</span><br><span class="line">│ ├── Enclave_private.pem</span><br><span class="line">│ └── TrustedLibrary</span><br><span class="line">│ ├── Libc.cpp</span><br><span class="line">│ ├── Libc.edl</span><br><span class="line">│ ├── Libcxx.cpp</span><br><span class="line">│ ├── Libcxx.edl</span><br><span 
class="line">│ ├── Thread.cpp</span><br><span class="line">│ └── Thread.edl</span><br><span class="line">├── Include</span><br><span class="line">│ └── user_types.h</span><br><span class="line">├── Makefile</span><br><span class="line">└── README.txt</span><br></pre></td></tr></table></figure><ol><li><p>App文件</p><p>该文件夹存放应用程序中的<strong>不可信</strong>代码部分。</p><ul><li>App.cpp文件:该文件是应用程序中的不可信部分代码,其中包括了创建Enclave及销毁Enclave的代码,也定义了一些相关的返回码供使用者查看Enclave程序的执行状态。其中的main函数是整个项目的入口函数。</li><li>App.h文件:该文件是应用程序中的不可信部分代码的头文件,定义了一些宏常量和函数声明。</li><li>TrustedLibrary和Edger8rSyntax文件夹:提供函数库和工具</li></ul></li><li><p>Enclave文件夹</p><p>该文件夹存放应用程序中的<strong>可信代码</strong>部分和<strong>可信与不可信代码接口</strong>文件</p><ul><li>Enclave.config.xml文件:该文件是Enclave的配置文件,定义了Enclave的元数据信息</li><li>Enclave.cpp文件:该文件是应用程序中的可信部分代码,包括了可信函数的实现</li><li>Enclave.h文件:该文件是应用程序中的可信部分代码的头文件,定义了一些宏常量和函数声明</li><li>Enclave.edl文件:该文件是Enclave的接口定义文件,定义了不可信代码调用可信代码的ECALL函数接口和可信代码调用不可信代码的OECALL函数接口</li><li>Enclave.lds文件:该文件定义了一些Enclave可执行文件信息</li><li>Enclave_private.pem文件:该文件是SGX生成的私钥</li></ul></li><li><p>Include文件夹</p><p>该文件夹存放被Enclave接口定义文件Enclave.edl使用的头文件,包括一些宏定义。</p></li><li><p>Makefile文件</p><p>该文件是项目的编译文件,定义了项目的编译信息</p></li></ol><p>在编译后,会生成名为 ‘app’ 的可执行文件。</p><p>SampleCode文件下的其他文件大同小异,结构都差不多。</p>]]></content>
<summary type="html">
<h1 id="Enclave文件结构"><a href="#Enclave文件结构" class="headerlink" title="Enclave文件结构"></a>Enclave文件结构</h1><p>大致总结一下Enclave的程序文件结构,参考的结构是Ubuntu
</summary>
<category term="SGX" scheme="http://chzhou.cc/tags/SGX/"/>
</entry>
<entry>
<title>SGX Q&A</title>
<link href="http://chzhou.cc/2018/08/13/SGX%20Q&A/"/>
<id>http://chzhou.cc/2018/08/13/SGX Q&A/</id>
<published>2018-08-13T13:14:43.000Z</published>
<updated>2019-03-18T14:21:07.161Z</updated>
<content type="html"><![CDATA[<ol><li><p>EPC是什么?物理位置在哪里?</p><ul><li>EPC:Enclave Page Cache.</li><li>The contents of enclaves and the associated data structures are stored in the Enclave Page Cache (EPC), which is a subset of <strong>DRAM</strong>. (which is not on CPU and it is <strong>MAIN MEMORY</strong>)</li></ul><p><img src="https://insujang.github.io/assets/images/170403/epc.png" alt="EPC and PRM layout"></p><p>注:PRM:Processor Reserved Memory</p><p>(<a href="https://insujang.github.io/2017-04-03/intel-sgx-protection-mechanism/" target="_blank" rel="noopener">参考链接</a>)</p></li><li><p>谁对EPC有读取权限?</p><p>应用程序由可信部分和不可信部分构成。只有可信函数被调用,才能访问EPC。其他均被阻挡。<a href="https://software.intel.com/zh-cn/sgx/details" target="_blank" rel="noopener">参考链接</a></p><p><img src="https://software.intel.com/sites/default/files/managed/6f/ab/runtime-execution.png" alt></p></li><li><p>现在的EPC是多大?</p><ul><li>因为EPC在内存中,而内存又被多个其他进程使用,为了不产生冲突,经过Intel分析后将大小设置为定值.<a href="https://software.intel.com/en-us/forums/intel-software-guard-extensions-intel-sgx/topic/737218" target="_blank" rel="noopener">参考链接</a></li><li>对于Win:如果OEM支持PRMRR选项(没有查到PRMRR准确定义,大概就是个选项),那么可将大小设置为32 MB, 64 MB or 128 MB。BIOS中默认大小为 128 MB.<a href="https://software.intel.com/en-us/articles/getting-started-with-sgx-sdk-for-windows" target="_blank" rel="noopener">参考链接-官网</a></li><li>对于Linux:因为 Linux支持 paging 技术,而Win不支持。所以在Linux中可以突破Win的限制,在所参考的链接里大小最大达到了4G. <a href="https://software.intel.com/en-us/forums/intel-software-guard-extensions-intel-sgx/topic/670322#comment-1878875" target="_blank" rel="noopener">参考链接-时间为16年到17年初</a></li><li>在上一条的参考链接里,有条回答进行了总结:“The physical protected memory is limited to the PRMRR size set in BIOS and the max we support at this time is 128MB in Skylake. The reason why you are able to set the heapsize you set is because of the paging support in Linux driver and we don’t have this support in Windows at this time. Similar to how memory is managed in OS, enclave pages are managed similarly.”</li></ul></li><li><p>数据送到EPC中是加密的还是未加密的?</p><p>这个问题得到的资料比较混乱,现未有准确答案。只简单罗列搜索到的资料。</p><ul><li><p><a href="https://software.intel.com/zh-cn/sgx/details" target="_blank" rel="noopener">参考资料1</a></p><p>对于enclave(中文为“围圈”),所有进程数据均以明文形式可见;外部访问围圈数据被拒绝</p></li><li><p><a href="https://software.intel.com/zh-cn/videos/how-to-seal-data-in-intel-sgx" target="_blank" rel="noopener">参考资料2-页面下面文字稿选项第四段</a></p><p>该资料未明确提到在enclave中数据是不是加密的。但是提到将数据从enclave到不信任的内存中,需要进行sealing,即进行加密。所以从这个行为推测enclave是未加密的(否则如果是加密的话就不需要sealing了。不知道这样理解对不对)</p></li><li><p><a href="https://software.intel.com/en-us/forums/intel-software-guard-extensions-intel-sgx/topic/722444" target="_blank" rel="noopener">参考资料3</a></p><p>提问者从文档当中对数据是否加密产生了矛盾的结论。Intel的工程师回答”Data in EPC is encrypted and integrity protected “, 但之后又说”From the <strong>CPU standpoint</strong>, data in EPC is unencrypted, because the MEE sits transparently between the CPU and the PRM. In other words, the data in EPC is encrypted because it’s outside the CPU package. However, it doesn’t need to be this way. For instance, a CPU with special on-chip memory wouldn’t need the MEE and the EPC memory wouldn’t have to be encrypted.”</p></li></ul></li></ol><pre><code>对于这个问题我还需要再查一查。</code></pre><ol start="5"><li><p>对于EPC的R/W ops,OS是可以看见的?</p><p>(未找见相应资料。我推测如果操作是enclave与外界(memory)的话,肯定能被OS看见。在enclave内部的话,就看不见了)</p></li></ol>]]></content>
<summary type="html">
<ol>
<li><p>EPC是什么?物理位置在哪里?</p>
<ul>
<li>EPC:Enclave Page Cache.</li>
<li>The contents of enclaves and the associated data structures are st
</summary>
<category term="SGX" scheme="http://chzhou.cc/tags/SGX/"/>
</entry>
<entry>
<title>Spark Q&A</title>
<link href="http://chzhou.cc/2018/08/05/Spark%20Q&A/"/>
<id>http://chzhou.cc/2018/08/05/Spark Q&A/</id>
<published>2018-08-05T03:29:21.000Z</published>
<updated>2019-03-18T14:22:22.532Z</updated>
<content type="html"><![CDATA[<ol><li><p>Spark的shuffle类操作有哪些(除去groupbyKey)?这些操作是把partition 直接load到内存中吗?</p><blockquote><p>Operations which can cause a shuffle include <strong>repartition operations</strong> like repartition and coalesce, <strong>‘ByKey’ operations</strong> (except for counting) like groupByKey and reduceByKey, and <strong>join operations</strong> like cogroup and join <a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#background" target="_blank" rel="noopener">RDD doc</a></p></blockquote><ul><li>repartition operations: <code>repartition</code>, <code>coalesce</code></li><li>‘ByKey’ operations: <code>groupByKey</code>, <code>reduceByKey</code>, <code>aggregateByKey</code>, <code>sortByKey</code> (<code>countByKey</code> is an <em>Actions</em> operation, so it isn’t a <em>shuffle operation</em>)</li><li>join operations: <code>cogroup</code>, <code>join</code></li><li><p>Another operation which takes “numPartitions” as an argument is <code>distinct</code> operation</p><p>RDD is stored in memory by default. There are seven storage level. The full set of storage levels can be found <a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence" target="_blank" rel="noopener">here</a>. <em>MEMORY_ONLY</em> is the default level, which means when data can’t fit in memory, <strong>some partitions will not be cached</strong> and will be <strong>recomputed</strong> on the fly each time they’re needed. In <em>MEMORY_AND_DISK</em> level, if the RDD does not fit in memory, store the partitions that don’t fit on disk, and read them from there when they’re needed. Also, shuffle generates a large number of intermediate files on disk, these files are preserved until the corresponding RDDs are no longer used and are garbage collected. This is done so the shuffle files don’t need to be re-created if the lineage is re-computed.</p><p>(有个问题:刚开始的资料来自RDD doc,在这个shuffle类目<a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#performance-impact" target="_blank" rel="noopener">链接</a>的第三段最后一句,原文是说内存不够的话就会把tables存到disk中。这里说的只是针对’ByKey操作吗?(因为上文在说ByKey操作))</p></li></ul></li><li><p>Spark中所谓的lazy transformation触发条件有哪些?</p><blockquote><p>The transformations are only computed when an action requires a result to be returned to the driver program. 
<a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-operations" target="_blank" rel="noopener">RDD doc</a></p></blockquote><p> <a href="https://spark.apache.org/docs/latest/rdd-programming-guide.html#actions" target="_blank" rel="noopener">Actions操作</a></p></li><li><p>Spark core API</p><p> Spark Core提供Spark最基础与最核心的功能,主要包括以下功能:</p><ul><li>SparkContext:通常而言,Driver Application的执行与输出都是通过SparkContext来完成的。在正式提交Application之前,首先需要初始化SparkContext。SparkContext隐藏了网络通信、分布式部署、消息通信、存储能力、计算能力、缓存、测量系统、文件服务、Web服务等内容,应用程序开发者只需要使用SparkContext提供的API完成功能开发。SparkContext内置的DAGScheduler负责创建Job,将DAG中的RDD划分到不同的Stage,提交Stage等功能。内置的TaskScheduler负责资源的申请,任务的提交及请求集群对任务的调度等工作。 </li><li>存储体系:Spark优先考虑使用各节点的内存作为存储,当内存不足时才会考虑使用磁盘,这极大地减少了磁盘IO,提升了任务执行的效率,使得Spark适用于实时计算、流式计算等场景。此外,Spark还提供了以内存为中心的高容错的分布式文件系统Tachyon供用户进行选择。Tachyon能够为Spark提供可靠的内存级的文件共享服务。 </li><li>计算引擎:计算引擎由SparkContext中的DAGScheduler、RDD以及具体节点上的Executor负责执行的Map和Reduce任务组成。DAGScheduler和RDD虽然位于SparkContext内部,但是在任务正式提交与执行之前会将Job中的RDD组织成有向无环图(DAG),并对Stage进行划分,决定了任务执行阶段任务的数量、迭代计算、shuffle等过程。 </li><li>部署模式:由于单节点不足以提供足够的存储和计算能力,所以作为大数据处理的Spark在SparkContext的TaskScheduler组件中提供了对Standalone部署模式的实现和Yarn、Mesos等分布式资源管理系统的支持。通过使用Standalone、Yarn、Mesos等部署模式为Task分配计算资源,提高任务的并发执行效率。</li></ul></li><li><p>Where is RDD’s lineage stored? And how to get it?</p><blockquote><p>The RDD‘s lineage is stored in memory, same as RDD. And the RDD lineage lives on the driver where RDDs live. <a href="https://stackoverflow.com/questions/34713793/where-spark-rdd-lineage-is-stored" target="_blank" rel="noopener">stackoverflow</a></p></blockquote><p> To get lineage:</p><ol><li><p>Using <code>toDebugString</code> method, one can get RDD lineage graph. (<a href="https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-rdd-lineage.html#toDebugString" target="_blank" rel="noopener">参考</a>)</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> wordCount = sc.textFile(<span class="string">"README.md"</span>).flatMap(_.split(<span class="string">"\\s+"</span>)).map((_, <span class="number">1</span>)).reduceByKey(_ + _)</span><br><span class="line">wordCount: org.apache.spark.rdd.<span class="type">RDD</span>[(<span class="type">String</span>, <span class="type">Int</span>)] = <span class="type">ShuffledRDD</span>[<span class="number">21</span>] at reduceByKey at <console>:<span class="number">24</span></span><br><span class="line"></span><br><span class="line">scala> wordCount.toDebugString</span><br><span class="line">res13: <span class="type">String</span> =</span><br><span class="line">(<span class="number">2</span>) <span class="type">ShuffledRDD</span>[<span class="number">21</span>] at reduceByKey at <console>:<span class="number">24</span> []</span><br><span class="line"> +-(<span class="number">2</span>) <span class="type">MapPartitionsRDD</span>[<span class="number">20</span>] at map at <console>:<span class="number">24</span> []</span><br><span class="line"> | <span class="type">MapPartitionsRDD</span>[<span class="number">19</span>] at flatMap at <console>:<span class="number">24</span> []</span><br><span class="line"> | <span 
class="type">README</span>.md <span class="type">MapPartitionsRDD</span>[<span class="number">18</span>] at textFile at <console>:<span class="number">24</span> []</span><br><span class="line"> | <span class="type">README</span>.md <span class="type">HadoopRDD</span>[<span class="number">17</span>] at textFile at <console>:<span class="number">24</span> []</span><br></pre></td></tr></table></figure></li><li><p>In spark shell, with <em>spark.logLineage</em> property enabled , <code>toDebugString</code> is included when executing an action.</p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">./bin/spark-shell --conf spark.logLineage=true</span><br></pre></td></tr></table></figure></li><li><p>Using <code>spark-submit</code> </p><p>This section is still in progress…( Because using <code>--conf spark.logLineage=true</code>, the console doesn`t print the graph.)</p><p>And this is the <code>runjob</code> method’s souce code in SparkContext class.</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">runJob</span></span>[<span class="type">T</span>, <span class="type">U</span>: <span class="type">ClassTag</span>](</span><br><span class="line"> rdd: <span class="type">RDD</span>[<span class="type">T</span>],</span><br><span class="line"> func: (<span class="type">TaskContext</span>, <span class="type">Iterator</span>[<span class="type">T</span>]) => <span class="type">U</span>,</span><br><span class="line"> partitions: <span class="type">Seq</span>[<span class="type">Int</span>],</span><br><span class="line"> resultHandler: (<span class="type">Int</span>, <span class="type">U</span>) => <span class="type">Unit</span>): <span class="type">Unit</span> = {</span><br><span class="line"> <span class="keyword">if</span> (stopped.get()) {</span><br><span class="line"> <span class="keyword">throw</span> <span class="keyword">new</span> <span class="type">IllegalStateException</span>(<span class="string">"SparkContext has been shutdown"</span>)</span><br><span class="line"> }</span><br><span class="line"> <span class="keyword">val</span> callSite = getCallSite</span><br><span class="line"> <span class="keyword">val</span> cleanedFunc = clean(func)</span><br><span class="line"> logInfo(<span class="string">"Starting job: "</span> + callSite.shortForm)</span><br><span class="line"> <span class="keyword">if</span> (conf.getBoolean(<span class="string">"spark.logLineage"</span>, <span class="literal">false</span>)) {</span><br><span class="line"> logInfo(<span class="string">"RDD's recursive dependencies:\n"</span> + rdd.toDebugString)</span><br><span class="line"> }</span><br></pre></td></tr></table></figure></li></ol></li></ol>]]></content>
<summary type="html">
<ol>
<li><p>Spark的shuffle类操作有哪些(除去groupbyKey)?这些操作是把partition 直接load到内存中吗?</p>
<blockquote>
<p>Operations which can cause a shuffle include
</summary>
<category term="Spark" scheme="http://chzhou.cc/tags/Spark/"/>
</entry>
<entry>
<title>SparkML电影推荐流程分析</title>
<link href="http://chzhou.cc/2018/08/02/SparkML%E7%94%B5%E5%BD%B1%E6%8E%A8%E8%8D%90%E6%B5%81%E7%A8%8B%E5%88%86%E6%9E%90/"/>
<id>http://chzhou.cc/2018/08/02/SparkML电影推荐流程分析/</id>
<published>2018-08-02T09:59:46.000Z</published>
<updated>2019-03-18T14:34:36.342Z</updated>
<content type="html"><![CDATA[<h1 id="SparkML电影推荐流程分析"><a href="#SparkML电影推荐流程分析" class="headerlink" title="SparkML电影推荐流程分析"></a>SparkML电影推荐流程分析</h1><p>之前采用<code>spark-submit</code> 进行分析,产出的信息太多,很难缕清关系,难以得到每步产生的数据和操作过程。所以采用<code>spark-shell</code> 以一行一行输入的方式交互进行程序运行,同时从Web-UI上产生的信息进行同步分析。以下为每步操作:</p><p>为了打字方便,以下 Web-UI统一拿 web 代替。</p><ol><li><p>启动<code>spark-shell</code></p><figure class="highlight shell"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">chzhou@lotus02:/usr/spark/bin$ spark-shell --conf spark.logLineage=true --master spark://lotus02:7077 --deploy-mode client --jars file:///home/chzhou/mltr/machine-learning/target/scala-2.11/movielens-als_2.11-0.1.jar</span><br></pre></td></tr></table></figure><ul><li>操作的时候把程序用sbt打包的jar包导入,这样在shell里就可以调用原程序自定义的函数</li><li>指定master和deploy-mode,以分布式运行</li><li>使得<code>spark.logLineage</code> 为true,这样就能在控制台自动输出 RDD 的lineage</li></ul></li><li><p>导入库</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">import</span> java.io.<span class="type">File</span></span><br><span class="line"><span class="keyword">import</span> scala.io.<span class="type">Source</span></span><br><span class="line"><span class="keyword">import</span> org.apache.log4j.<span class="type">Logger</span></span><br><span class="line"><span class="keyword">import</span> org.apache.log4j.<span class="type">Level</span></span><br><span class="line"><span class="keyword">import</span> org.apache.spark.<span class="type">SparkConf</span></span><br><span class="line"><span class="keyword">import</span> org.apache.spark.<span class="type">SparkContext</span></span><br><span class="line"><span class="keyword">import</span> org.apache.spark.<span class="type">SparkContext</span>._</span><br><span class="line"><span class="keyword">import</span> org.apache.spark.rdd._</span><br><span class="line"><span class="keyword">import</span> org.apache.spark.mllib.recommendation.{<span class="type">ALS</span>, <span class="type">Rating</span>, <span class="type">MatrixFactorizationModel</span>}</span><br></pre></td></tr></table></figure></li><li><p>读取个人喜好的rating文件,并形成rdd</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> myRatings = <span class="type">MovieLensALS</span>.loadRatings(<span class="string">"/home/chzhou/ml-1m/personalRatings.txt"</span>)</span><br><span class="line">myRatings: <span class="type">Seq</span>[org.apache.spark.mllib.recommendation.<span class="type">Rating</span>] = <span class="type">Stream</span>(<span class="type">Rating</span>(<span class="number">0</span>,<span class="number">1</span>,<span class="number">2.0</span>), ?)</span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">val</span> myRatingsRDD = sc.parallelize(myRatings, <span class="number">1</span>).cache</span><br><span class="line">myRatingsRDD: org.apache.spark.rdd.<span 
class="type">RDD</span>[org.apache.spark.mllib.recommendation.<span class="type">Rating</span>] = <span class="type">ParallelCollectionRDD</span>[<span class="number">0</span>] at parallelize at <console>:<span class="number">39</span></span><br></pre></td></tr></table></figure><ul><li>第二句用<code>cache</code>将其存入内存,这样Web-UI中的Storage选项之后就可以查到RDD信息</li><li>此时Web-UI中还是全空的,没有任何信息,因为此时并没有Actions操作,并没有实际产生RDD</li></ul></li><li><p>在spark-shell中以多行输入的方式读取rating.dat文件</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br></pre></td><td class="code"><pre><span class="line">scala> :paste</span><br><span class="line"><span class="comment">// Entering paste mode (ctrl-D to finish)</span></span><br><span class="line"> <span class="keyword">val</span> ratings = sc.textFile(<span class="string">"hdfs://lotus02:9000/ml/medium/ratings.dat"</span>).map { line =></span><br><span class="line"> <span class="keyword">val</span> fields = line.split(<span class="string">"::"</span>)</span><br><span class="line"> <span class="comment">// format: (timestamp % 10, Rating(userId, movieId, rating))</span></span><br><span class="line"> (fields(<span class="number">3</span>).toLong % <span class="number">10</span>, <span class="type">Rating</span>(fields(<span class="number">0</span>).toInt, fields(<span class="number">1</span>).toInt, fields(<span class="number">2</span>).toDouble))</span><br><span class="line"> }.cache</span><br><span class="line"><span class="comment">// Exiting paste mode, now interpreting.</span></span><br><span class="line"></span><br><span class="line">ratings: org.apache.spark.rdd.<span class="type">RDD</span>[(<span class="type">Long</span>, org.apache.spark.mllib.recommendation.<span class="type">Rating</span>)] = <span class="type">MapPartitionsRDD</span>[<span class="number">3</span>] at map at <console>:<span class="number">37</span></span><br></pre></td></tr></table></figure></li><li><p>在spark-shell中以多行输入的方式读取movies.dat文件</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br></pre></td><td class="code"><pre><span class="line">scala> :paste</span><br><span class="line"><span class="comment">// Entering paste mode (ctrl-D to finish)</span></span><br><span class="line"><span class="keyword">val</span> movies = sc.textFile(<span class="string">"hdfs://lotus02:9000/ml/medium/movies.dat"</span>).map { line =></span><br><span class="line"> <span class="keyword">val</span> fields = line.split(<span class="string">"::"</span>)</span><br><span class="line"> <span class="comment">// format: (movieId, movieName)</span></span><br><span class="line"> (fields(<span class="number">0</span>).toInt, fields(<span class="number">1</span>))</span><br><span class="line"> }.cache</span><br><span class="line"></span><br><span class="line"><span 
class="comment">// Exiting paste mode, now interpreting.</span></span><br><span class="line"></span><br><span class="line">movies: org.apache.spark.rdd.<span class="type">RDD</span>[(<span class="type">Int</span>, <span class="type">String</span>)] = <span class="type">MapPartitionsRDD</span>[<span class="number">6</span>] at map at <console>:<span class="number">37</span></span><br><span class="line"></span><br><span class="line">scala> movies.collect().toMap</span><br><span class="line">res0: scala.collection.immutable.<span class="type">Map</span>[<span class="type">Int</span>,<span class="type">String</span>] = <span class="type">Map</span>(<span class="number">2163</span> -> <span class="type">Attack</span> of the <span class="type">Killer</span> <span class="type">Tomatoes</span>! (<span class="number">1980</span>), <span class="number">645</span> -> <span class="type">Nelly</span> & <span class="type">Monsieur</span> <span class="type">Arnaud</span> (<span class="number">1995</span>), <span class="number">892</span> -> <span class="type">Twelfth</span> <span class="type">Night</span> (<span class="number">1996</span>), <span class="number">69</span> -> <span class="type">Friday</span> (<span class="number">1995</span>), <span class="number">2199</span> -> <span class="type">Phoenix</span> (<span class="number">1998</span>), <span class="number">3021</span> -> <span class="type">Funhouse</span>, <span class="type">The</span> (<span class="number">1981</span>), <span class="number">1322</span> -> <span class="type">Amityville</span> <span class="number">1992</span>: <span class="type">It</span><span class="symbol">'s</span> <span class="type">About</span> <span class="type">Time</span> (<span class="number">1992</span>), <span class="number">1665</span> -> <span class="type">Bean</span> (<span class="number">1997</span>), <span class="number">1036</span> -> <span class="type">Die</span> <span class="type">Hard</span> (<span class="number">1988</span>), <span class="number">2822</span> -> <span class="type">Medicine</span> <span class="type">Man</span> (<span class="number">1992</span>), <span class="number">2630</span> -> <span class="type">Besieged</span> (<span class="type">L</span>' <span class="type">Assedio</span>) (<span class="number">1998</span>), <span class="number">3873</span> -> <span class="type">Cat</span> <span class="type">Ballou</span> (<span class="number">1965</span>), <span class="number">1586</span> -> <span class="type">G</span>.<span class="type">I</span>. 
<span class="type">Jane</span> (<span class="number">1997</span>), <span class="number">1501</span> -> <span class="type">Keys</span> to <span class="type">Tulsa</span> (<span class="number">1997</span>), <span class="number">2452</span> -> <span class="type">Gate</span> <span class="type">II</span>: <span class="type">Trespassers</span>, <span class="type">The</span> (<span class="number">1990</span>), <span class="number">809</span> -> <span class="type">Fled</span> (<span class="number">1996</span>), <span class="number">1879</span> -> <span class="type">Hanging</span> <span class="type">Garden</span>, <span class="type">The</span> (<span class="number">1997</span>), <span class="number">1337</span> -> <span class="type">Body</span> <span class="type">Snatcher</span>, <span class="type">The</span> (<span class="number">1945</span>), <span class="number">1718</span> -> <span class="type">Stranger</span> in the <span class="type">House</span> (<span class="number">1997</span>), <span class="number">2094</span> -> <span class="type">Rocketeer</span>, <span class="type">The</span> (<span class="number">1991</span>), <span class="number">3944</span> -> <span class="type">Bootmen</span> (<span class="number">2000</span>), <span class="number">1411</span> -> <span class="type">Hamlet</span> (<span class="number">1996</span>), <span class="number">629</span> -> <span class="type">Rude</span> (<span class="number">1995</span>), <span class="number">3883</span> -> <span class="type">Catfish</span> in <span class="type">Black</span> <span class="type">Bean</span> <span class="type">Sauce</span> (<span class="number">2.</span>.</span><br></pre></td></tr></table></figure><ul><li><p>这里改写了原程序,原程序是直接进行了collect.toMap操作,这里分成两步,先cache存到内存中,再进行colletc.toMap操作</p></li><li><p>因为进行了collect操作,此时web显示了信息</p><p><img src="/images/SparkML电影推荐流程分析/s0.PNG" alt="s0"><br>为Stage0信息,进行了map操作。(绿色代表在内存中)</p></li><li><p>在web中stage/Aggregated Metrics by Executor选项中,可以看到</p><p><img src="/images/SparkML电影推荐流程分析/s0 reco.PNG" alt="s0 reco"></p><p>图片有些小。。。简单来说,就是movies.dat中总共有3883条record,这里08机器存了1951条record,09上存了3883-1951=1932条数据。在这里看出了数据的分布。从这个方面也显示了上图中map操作的时候从movies.dat[4]和movies.dat[5]中获得数据。(但是看不出来movies.dat[4]是08还是09上)</p></li><li><p>附上storage界面信息</p><p><img src="/images/SparkML电影推荐流程分析/s0 stor.PNG" alt="s0 stor"></p></li></ul></li><li><p>对ratings进行统计计数,先是对numRatings计数</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numRatings = ratings.count</span><br><span class="line">numRatings: <span class="type">Long</span> = <span class="number">1000209</span></span><br></pre></td></tr></table></figure><ul><li><p>输出ratings的数据总共有1000209条数据</p></li><li><p>从web上查看stage信息</p><p><img src="/images/SparkML电影推荐流程分析/s1.PNG" alt="s1"></p><p>显示cache的是MapPartitionsRDD[3],这与在第4步中的控制台输出是一样的。</p></li><li><p>查看slave存储</p><p><img src="/images/SparkML电影推荐流程分析/s1 rec.PNG" alt="s1 rec"></p><p>08上有503331条record,09上有1000209-503331=496878条数据</p></li><li><p>storage界面信息和上一步差不多,不截图显示了。</p></li></ul></li><li><p>接下来统计用户数</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numUsers = ratings.map(_._2.user).distinct.count</span><br><span class="line">numUsers: <span class="type">Long</span> = <span 
class="number">6040</span></span><br></pre></td></tr></table></figure><p>这里的map语句不太懂(推测是对ratings的字段进行操作)。。。然后进行了distinct操作,找出unique的用户,然后count进行计数。这里得到有6040名用户</p><p>这里为Job2,Job2里有两个stage,第一个是distinct操作,第二个是count操作。</p><ul><li><p>第一个是distinct操作</p><p><img src="/images/SparkML电影推荐流程分析/j2s1.PNG" alt="j2s1"></p><p>这里用了之前cache过的RDD[3],然后进行map和distinct操作。</p><p>对于存储,不截图了,都一样,其中08上有3092条record,09上有2949条record(3092+2949=6041条,和上面输出的6040不一样。。??)</p></li><li><p>第二个是count操作</p><p><img src="/images/SparkML电影推荐流程分析/j2说.PNG" alt="j2说"></p><p>(不懂为什么右上角是distinct,难道是在distinct里进行count操作,所以这么显示??)</p><p> (在存储方面,08上是3020条record,09上是3021条record(加起来还是6041,为什么不是6040??)</p></li></ul></li><li><p>接下来进行numMovies统计</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numMovies = ratings.map(_._2.product).distinct.count</span><br><span class="line">numMovies: <span class="type">Long</span> = <span class="number">3706</span></span><br></pre></td></tr></table></figure><p>统计出来numMovies是3706部。</p><p>此时为Job3,和上一步一样,分为两个阶段,也是distinct和count操作。DAG图和上一步差不多。不截图了。在存储方面,distinct操作中08是3619条record,09上是3600条record。在count操作中08是3620条record,09是3599条数据。两个操作中的总数都是7219条record。</p></li><li><p>定义numPartions</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numPartitions = <span class="number">4</span></span><br><span class="line">numPartitions: <span class="type">Int</span> = <span class="number">4</span></span><br></pre></td></tr></table></figure></li><li><p>之后几步都是从ratings对数据进行切分,产生ML中的数据集。第一个数据集是训练集</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">scala> :paste</span><br><span class="line"><span class="comment">// Entering paste mode (ctrl-D to finish)</span></span><br><span class="line"><span class="keyword">val</span> training = ratings.filter(x => x._1 < <span class="number">6</span>)</span><br><span class="line"> .values</span><br><span class="line"> .union(myRatingsRDD)</span><br><span class="line"> .repartition(numPartitions)</span><br><span class="line"> .cache()</span><br><span class="line"></span><br><span class="line"><span class="comment">// Exiting paste mode, now interpreting.</span></span><br><span class="line"></span><br><span class="line">training: org.apache.spark.rdd.<span class="type">RDD</span>[org.apache.spark.mllib.recommendation.<span class="type">Rating</span>] = <span class="type">MapPartitionsRDD</span>[<span class="number">21</span>] at repartition at <console>:<span class="number">45</span></span><br></pre></td></tr></table></figure><p>此时web中并没有变化,因为没有Actions操作。但是用cache将其存在了内存中。并且注意到和myRatingsRDD进行了union操作,并进行了repartition。</p></li><li><p>产生验证集</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span 
class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line">scala> :paste</span><br><span class="line"><span class="comment">// Entering paste mode (ctrl-D to finish)</span></span><br><span class="line"></span><br><span class="line"><span class="keyword">val</span> validation = ratings.filter(x => x._1 >= <span class="number">6</span> && x._1 < <span class="number">8</span>)</span><br><span class="line"> .values</span><br><span class="line"> .repartition(numPartitions)</span><br><span class="line"> .cache()</span><br><span class="line"></span><br><span class="line"><span class="comment">// Exiting paste mode, now interpreting.</span></span><br><span class="line"></span><br><span class="line">validation: org.apache.spark.rdd.<span class="type">RDD</span>[org.apache.spark.mllib.recommendation.<span class="type">Rating</span>] = <span class="type">MapPartitionsRDD</span>[<span class="number">28</span>] at repartition at <console>:<span class="number">42</span></span><br></pre></td></tr></table></figure><p>web中还没有变化。进行了repartition。</p></li><li><p>产生测试集</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> test = ratings.filter(x => x._1 >= <span class="number">8</span>).values.cache()</span><br><span class="line">test: org.apache.spark.rdd.<span class="type">RDD</span>[org.apache.spark.mllib.recommendation.<span class="type">Rating</span>] = <span class="type">MapPartitionsRDD</span>[<span class="number">30</span>] at values at <console>:<span class="number">38</span></span><br></pre></td></tr></table></figure></li><li><p>统计训练集大小</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numTraining = training.count()</span><br><span class="line">numTraining: <span class="type">Long</span> = <span class="number">602252</span></span><br></pre></td></tr></table></figure><p>因为进行了count操作,此时web有repartition了信息。</p><p>为Job4,分为两个阶段,第一个为repartition,第二个是count操作。</p><ul><li><p>第一个为repartition,DAG图为</p><p><img src="/images/SparkML电影推荐流程分析/j4s1.PNG" alt="j4s1"></p><p>这里因为是对ratings操作,ratingsRDD已经cache过,所以直接读取,进行filter操作,然后与myRatings进行union,然后进行repartition。</p><p>存储方面,08上有303152条record,09上有299100条record,一共303152+299100=602252条record。</p></li><li><p>第二个是count操作,DAG图为</p><p><img src="/images/SparkML电影推荐流程分析/j4s2.PNG" alt="j4s2"></p><p>存储方面,08上有301126条record,09上有301126条record,一共301126+301126=602252条。</p></li><li><p>在web上的storage界面,显示Partitions已经为4:</p><p><img src="/images/SparkML电影推荐流程分析/j4stor.PNG" alt="j4stor"></p><p>前几个的RDD除了第一个rdd是一个partition,其他都是两个partition。RDD doc中关于partition是这样说的:“Normally, Spark tries to set the number of partitions automatically based on your cluster”。前几个都是spark自动生成的partition。</p></li></ul></li><li><p>统计验证集大小</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numValidation = validation.count()</span><br><span class="line">numValidation: <span class="type">Long</span> = <span 
class="number">198919</span></span><br></pre></td></tr></table></figure><p>此时为Job5,与上一步一样,同样分为repartition和count操作。</p><ul><li><p>repartition操作DAG图为</p><p><img src="/images/SparkML电影推荐流程分析/j5s1.PNG" alt="j5s1"></p><p>因为没有上一步的union操作,所以这里直接从以前cache过的RDD[3]进行filter,repartition操作。存储方面,08上有100299条数据,09上有98620条数据,一共100299+98620=198919条数据。</p></li><li><p>count操作</p><p><img src="/images/SparkML电影推荐流程分析/j5s2.PNG" alt="j5s2"></p><p>数据方面,08上有99459条数据,09上有99460条数据,一共99459+99460=198919条数据。</p></li><li><p>在web上的storage界面,partitions同样为4。(未截图)</p></li></ul></li><li><p>统计测试集数据大小。</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> numTest = test.count()</span><br><span class="line">numTest: <span class="type">Long</span> = <span class="number">199049</span></span><br></pre></td></tr></table></figure><p>此时为Job6阶段,因为对test没有进行repartition操作,这里只有count操作。</p><ul><li><p>DAG图为</p><p><img src="/images/SparkML电影推荐流程分析/j6s.PNG" alt="j6s"></p><p>在存储方面,08上有503331条record,09上有496878条record,一共503331+496878=1000209数据。(为什么与输出不一致?)</p><p>另外,在web的storage界面,因为没有repartition操作,产生的rdd[30]为两个partition。</p></li></ul></li><li><p>接下来是准备训练的一些参数</p><figure class="highlight scala"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br><span class="line">16</span><br><span class="line">17</span><br><span class="line">18</span><br><span class="line">19</span><br><span class="line">20</span><br><span class="line">21</span><br><span class="line">22</span><br><span class="line">23</span><br></pre></td><td class="code"><pre><span class="line">scala> <span class="keyword">val</span> ranks = <span class="type">List</span>(<span class="number">8</span>, <span class="number">12</span>)</span><br><span class="line">ranks: <span class="type">List</span>[<span class="type">Int</span>] = <span class="type">List</span>(<span class="number">8</span>, <span class="number">12</span>)</span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">val</span> lambdas = <span class="type">List</span>(<span class="number">0.1</span>, <span class="number">10.0</span>)</span><br><span class="line">lambdas: <span class="type">List</span>[<span class="type">Double</span>] = <span class="type">List</span>(<span class="number">0.1</span>, <span class="number">10.0</span>)</span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">val</span> numIters = <span class="type">List</span>(<span class="number">10</span>, <span class="number">20</span>)</span><br><span class="line">numIters: <span class="type">List</span>[<span class="type">Int</span>] = <span class="type">List</span>(<span class="number">10</span>, <span class="number">20</span>)</span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">var</span> bestModel: <span class="type">Option</span>[<span class="type">MatrixFactorizationModel</span>] = <span class="type">None</span></span><br><span class="line">bestModel: 
<span class="type">Option</span>[org.apache.spark.mllib.recommendation.<span class="type">MatrixFactorizationModel</span>] = <span class="type">None</span></span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">var</span> bestValidationRmse = <span class="type">Double</span>.<span class="type">MaxValue</span></span><br><span class="line">bestValidationRmse: <span class="type">Double</span> = <span class="number">1.7976931348623157E308</span></span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">var</span> bestRank = <span class="number">0</span></span><br><span class="line">bestRank: <span class="type">Int</span> = <span class="number">0</span></span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">var</span> bestLambda = <span class="number">-1.0</span></span><br><span class="line">bestLambda: <span class="type">Double</span> = <span class="number">-1.0</span></span><br><span class="line"></span><br><span class="line">scala> <span class="keyword">var</span> bestNumIter = <span class="number">-1</span></span><br><span class="line">bestNumIter: <span class="type">Int</span> = <span class="number">-1</span></span><br></pre></td></tr></table></figure><p>此时没有rdd产生,web上没有变化。</p></li><li><p>此时进行了模型训练,调用ML库中的ALS(交替最小二乘 alternating least squares)。此时产生了很多的操作,且数据不清晰,有大量的矩阵操作) </p></li></ol>]]></content>
<summary type="html">
<h1 id="SparkML电影推荐流程分析"><a href="#SparkML电影推荐流程分析" class="headerlink" title="SparkML电影推荐流程分析"></a>SparkML电影推荐流程分析</h1><p>之前采用<code>spark-su
</summary>
<category term="Spark" scheme="http://chzhou.cc/tags/Spark/"/>
</entry>
<entry>
<title>Vulnerable Contracts Resource</title>
<link href="http://chzhou.cc/2018/06/09/Vulnerable%20Contracts%20Resource/"/>
<id>http://chzhou.cc/2018/06/09/Vulnerable Contracts Resource/</id>
<published>2018-06-09T08:54:28.000Z</published>
<updated>2019-03-18T15:04:00.529Z</updated>
<content type="html"><![CDATA[<ul><li>Hackthiscontract.io (<a href="http://hackthiscontract.io/dashboard?address=0x957B256d320f03A9Be873380772F3Deb2AD78dE3" target="_blank" rel="noopener">地址</a>)<ul><li>需要输入Rinkeby address进行登陆</li><li>只提供了四个合约进行攻击游戏,分别是 ‘Naive Programmer’(over- and under-flow), ‘ERC20’, ‘Coin Flip’, ‘Lost Ether’</li></ul></li><li>trail of bits/not-so-smart-contracts (<a href="https://github.com/trailofbits/not-so-smart-contracts" target="_blank" rel="noopener">地址</a>)<ul><li>contains examples of common Ethereum smart contract vulnerabilities, including code from real smart contracts</li><li>合约类型<ul><li>Honeypots:6个</li><li>Integer overflow:1个</li><li>Missing constructor:2个</li><li>Race condition(TOD):1个</li><li>Reentrancy:1个</li><li>Unchecked external call:1个</li><li>Unprotected function:2个</li><li>Variable shadowing:1个</li><li>Wrong interface:1个</li></ul></li><li>Oyente能检测出来的(未实际检测,仅理论)<ul><li>Integer overflow</li><li>Race condition(TOD)</li><li>Reentrancy</li></ul></li></ul></li><li>GOATCasino (<a href="https://github.com/nccgroup/GOATCasino" target="_blank" rel="noopener">地址</a>)<ul><li>只有一个,主文件是Lottery.sol,类似于在《Survey of Smart Contract Attacks》论文中的4.3节Multi-player games</li></ul></li><li>ethernaut (<a href="https://github.com/OpenZeppelin/ethernaut" target="_blank" rel="noopener">地址1</a>) (<a href="https://ethernaut.zeppelin.solutions/" target="_blank" rel="noopener">地址2</a>)<ul><li>同样也是游戏网站,提供的例子比Hackthiscontract.io要多一些</li><li>有fallback,Reentrancy等漏洞合约</li></ul></li><li><p>capturetheether(<a href="https://capturetheether.com/challenges/" target="_blank" rel="noopener">地址</a>)</p><ul><li>tokensale: Integer Overflow</li></ul></li></ul>]]></content>
<summary type="html">
<ul>
<li>Hackthiscontract.io (<a href="http://hackthiscontract.io/dashboard?address=0x957B256d320f03A9Be873380772F3Deb2AD78dE3" target="_bl
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>creation code, input data, solc编译出来的code,这三种code有什么区别</title>
<link href="http://chzhou.cc/2018/06/04/creation-code-input-data-solc%E7%BC%96%E8%AF%91%E5%87%BA%E6%9D%A5%E7%9A%84code%EF%BC%8C%E8%BF%99%E4%B8%89%E7%A7%8Dcode%E6%9C%89%E4%BB%80%E4%B9%88%E5%8C%BA%E5%88%AB/"/>
<id>http://chzhou.cc/2018/06/04/creation-code-input-data-solc编译出来的code,这三种code有什么区别/</id>
<published>2018-06-04T15:27:11.000Z</published>
<updated>2018-06-04T15:28:44.331Z</updated>
<content type="html"><![CDATA[<h2 id="三种code"><a href="#三种code" class="headerlink" title="三种code"></a>三种code</h2><ul><li><p>creation code</p><ul><li>在后添加了Constructor Arguments </li></ul></li><li><p>input data</p><ul><li><p>在用命令创建contract的时候需要输入的data</p></li><li><p>格式为bin</p></li><li><p><a href="https://medium.com/@gus_tavo_guim/deploying-a-smart-contract-the-hard-way-8aae778d4f2a" target="_blank" rel="noopener">根据这个blog</a>和<a href="https://github.com/ethereum/go-ethereum/wiki/Contract-Tutorial" target="_blank" rel="noopener">Geth文档</a>,Solc编译出来的bin code作为参数传入创建合约的命令中。</p><ul><li><p>Medium blog</p><figure class="highlight javascript"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> deployTransationObject = { <span class="attr">from</span>: eth.accounts[<span class="number">0</span>], <span class="attr">data</span>: storageBinCode, <span class="attr">gas</span>: <span class="number">1000000</span> };</span><br><span class="line"><span class="keyword">var</span> storageInstance = storageContract.new(deployTransationObject)</span><br></pre></td></tr></table></figure></li><li><p>Geth 文档</p><figure class="highlight javascript"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">var</span> greeter = greeterContract.new(_greeting,{<span class="attr">from</span>:web3.eth.accounts[<span class="number">0</span>], <span class="attr">data</span>: greeterCompiled.greeter.code, <span class="attr">gas</span>: <span class="number">1000000</span>}, <span class="function"><span class="keyword">function</span>(<span class="params">e, contract</span>)</span></span><br></pre></td></tr></table></figure></li></ul></li></ul></li></ul><ul><li><p>solc</p><ul><li>solc的关于编译的有两个arguments,其中一个是 <code>--bin</code>,解释为Binary of the contracts in hex,另外一个是<code>--bin-runtime</code>,解释为Binary of the runtime part of the contracts in hex。</li><li>根据这个<a href="https://ethereum.stackexchange.com/questions/13086/solc-bin-vs-bin-runtime" target="_blank" rel="noopener">回答</a>,<code>--bin-runtime</code> is the code that is actually placed on the blockchain. 
The regular <code>--bin</code> output is the code placed on the blockchain <strong>plus</strong> the code needed to get this code placed on the blockchain</li><li>上面的Medium blog是调用了<code>--bin</code> 命令,而Geth官方文档用的是web3.eth.compile.solidity命令,推测也应该用的是<code>--bin</code>。(未证实)</li></ul></li></ul><h2 id="后续进展"><a href="#后续进展" class="headerlink" title="后续进展"></a>后续进展</h2><ul><li><p>creation code 与 input data</p><p>在Etherscan上寻找几个verified的合约进行验证,得出<strong>creation code</strong>和<strong>input data</strong>是完全一致的。代码由<strong>三部分</strong>组成,第一部分是前面的一些数字,代表着初始化合约的init过程。第二部分是合约的主体过程。第三部分是合约的Constructor Arguments,被添加到了最后。</p></li><li><p>solc</p><p>在stack exchange上进行了<a href="https://ethereum.stackexchange.com/questions/50180/whats-the-contract-creation-code-in-etherscan-verfied-contract" target="_blank" rel="noopener">提问</a>,问题是creation code和solc编译出来的code有何区别。回答是Contract Creation Code is the full bytecode from what contract was deployed, <strong>including constructor parameters</strong>。如果合约没有constructor parameters的话,那么这两种code都是一致的。</p></li></ul><h2 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h2><ul><li><p><a href="https://etherscancom.freshdesk.com/support/solutions/articles/35000022165-contract-verification-constructor-arguments" target="_blank" rel="noopener">https://etherscancom.freshdesk.com/support/solutions/articles/35000022165-contract-verification-constructor-arguments</a></p></li><li><p><a href="https://ethereum.stackexchange.com/questions/13086/solc-bin-vs-bin-runtime" target="_blank" rel="noopener">https://ethereum.stackexchange.com/questions/13086/solc-bin-vs-bin-runtime</a></p></li><li><p><a href="https://medium.com/@gus_tavo_guim/deploying-a-smart-contract-the-hard-way-8aae778d4f2a" target="_blank" rel="noopener">https://medium.com/@gus_tavo_guim/deploying-a-smart-contract-the-hard-way-8aae778d4f2a</a></p></li><li><p><a href="https://github.com/ethereum/go-ethereum/wiki/Contract-Tutorial" target="_blank" rel="noopener">https://github.com/ethereum/go-ethereum/wiki/Contract-Tutorial</a></p></li><li><p><a href="https://ethereum.stackexchange.com/questions/50180/whats-the-contract-creation-code-in-etherscan-verfied-contract" target="_blank" rel="noopener">https://ethereum.stackexchange.com/questions/50180/whats-the-contract-creation-code-in-etherscan-verfied-contract</a></p></li><li><p>验证creation code和input data是否一致的几个合约</p><ul><li><a href="https://etherscan.io/address/0xcac337492149bdb66b088bf5914bedfbf78ccc18#code" target="_blank" rel="noopener">https://etherscan.io/address/0xcac337492149bdb66b088bf5914bedfbf78ccc18#code</a></li><li><a href="https://etherscan.io/address/0x7c333b69021b3ad9288d3b0083f9bd27c6d4680a#code" target="_blank" rel="noopener">https://etherscan.io/address/0x7c333b69021b3ad9288d3b0083f9bd27c6d4680a#code</a></li><li><a href="https://etherscan.io/address/0x233d2daad4018fae14c69b2830bf97057c7fb1b5#code" target="_blank" rel="noopener">https://etherscan.io/address/0x233d2daad4018fae14c69b2830bf97057c7fb1b5#code</a></li></ul><p>注:这三个合约的最后都有Constructor Arguments ,不知这个有没有影响导致两种code一致。但是现在比较新的合约在verify的时候都需要提供Constructor Arguments,所以就不加考虑这个因素以及旧的没有Constructor Arguments的合约。</p><p>注:在最新的verified contract里面,检查了最新的前三个合约的两种code,是完全一致的。</p></li></ul>]]></content>
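<p>As a quick cross-check of the conclusion above (creation code = init code + runtime body + constructor arguments, while the chain only stores the runtime body), here is a rough Python sketch. It assumes the <code>requests</code> package, your own Infura key in place of the placeholder, and a local file <code>creation_code.hex</code> holding the Contract Creation Code pasted from Etherscan; the address is the first verified contract listed below.</p><figure class="highlight python"><pre><code>
# Rough sketch: compare Etherscan's "Contract Creation Code" with the runtime
# bytecode returned by eth_getCode. The creation code should contain the runtime
# body, preceded by the deploy/init code and followed by the constructor arguments.
import requests

INFURA_URL = "https://mainnet.infura.io/<YOUR-API-KEY>"            # placeholder key
ADDRESS = "0xcac337492149bdb66b088bf5914bedfbf78ccc18"             # verified contract from the list below

payload = {"jsonrpc": "2.0", "method": "eth_getCode", "params": [ADDRESS, "latest"], "id": 1}
runtime = requests.post(INFURA_URL, json=payload).json()["result"][2:]   # strip the 0x prefix

creation = open("creation_code.hex").read().strip().lower()        # pasted from Etherscan by hand
init_code, found, ctor_args = creation.partition(runtime.lower())
assert found, "runtime code not found inside the creation code"
print(len(init_code) // 2, "bytes of init code,", len(ctor_args) // 2, "bytes of constructor arguments")
</code></pre></figure>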
<summary type="html">
<h2 id="三种code"><a href="#三种code" class="headerlink" title="三种code"></a>三种code</h2><ul>
<li><p>creation code</p>
<ul>
<li>在后添加了Constructor A
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>DeepLab v3+: prediction out of bound</title>
<link href="http://chzhou.cc/2018/05/31/DeepLab-v3-predition-out-of-bound/"/>
<id>http://chzhou.cc/2018/05/31/DeepLab-v3-predition-out-of-bound/</id>
<published>2018-05-31T15:57:48.000Z</published>
<updated>2018-05-31T16:02:09.430Z</updated>
<content type="html"><![CDATA[<h2 id="出现情景"><a href="#出现情景" class="headerlink" title="出现情景"></a>出现情景</h2><p>最近在用DeepLab v3+ 训练模型,已经训练好了自己的数据集。可是在验证的时候,程序总是报错。抛出<code>prediction out of bound</code> 的错误。意思很好理解,就是预测超出了范围。但是范围是什么呢?又是如何超出的呢?经过搜索,找出了答案。</p><h2 id="文件代码"><a href="#文件代码" class="headerlink" title="文件代码"></a>文件代码</h2><p>在/deeplab/datasets文件夹下,有一个名为 <em>segmentation_dataset.py</em> 的文件。在该文件夹里,就定义了训练集和验证集的信息。代码如下:</p><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line">_PASCAL_VOC_SEG_INFORMATION = DatasetDescriptor(</span><br><span class="line"> splits_to_sizes={</span><br><span class="line"> <span class="string">'train'</span>: <span class="number">7</span>,</span><br><span class="line"> <span class="string">'trainval'</span>: <span class="number">0</span>,</span><br><span class="line"> <span class="string">'val'</span>: <span class="number">3</span>,</span><br><span class="line"> },</span><br><span class="line"> num_classes=<span class="number">8</span>,</span><br><span class="line"> ignore_label=<span class="number">255</span>,</span><br><span class="line">)</span><br></pre></td></tr></table></figure><p>这里已经针对自己的数据集进行了修改。其中<code>train</code> 和 <code>val</code>的字段意思就是对应数据集的大小。因为我只是测试,所以这里我的训练集就只有7张,验证集是3张。下面的<code>num_classes</code>和<code>ignore_lable</code>是数据集的类别数目和忽视的类别。而我出问题的就在这个<code>num_classes</code>上。</p><h2 id="问题来源"><a href="#问题来源" class="headerlink" title="问题来源"></a>问题来源</h2><p>直觉认为这里<code>num_classes</code>就是类别的数目,当然这个想法也是对的。但是这里的前提是<strong>数据集的lable标记是从1开始的</strong>,也就是说,你的类别从1,2,3,…,num_classes这样定义的。但是这个是很反人类的,因为有的时候为了更加直观理解,并不一定从1开始。比如这次百度提供的数据集,车的标记就是33。而我的写的<code>num_classes</code> 是8,自然33要大于8,就抛出了<code>prediction out of bound</code>的错误了。</p><h2 id="后记"><a href="#后记" class="headerlink" title="后记"></a>后记</h2><p>在查到这个问题后,将自己的<code>num_classes</code>变成了这次数据集对应的lableID,但是这次百度给的数据集的id是乘以1000的,所以车的id33,就变成了33000。我这样改之后,训练的时候又爆出了<code>OOM</code> 的问题,也就是说训练爆内存了,显然是因为<code>num_classes</code>的数目太大。</p><p>这样就涉及到了修改对应的id问题,也就是说把车的id经过修改变成1。<a href="https://gist.github.com/DrSleep/4bce37254c5900545e6b65f6a0858b9c" target="_blank" rel="noopener">具体的方案在这里</a>。</p><h2 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h2><ol><li><a href="https://github.com/tensorflow/models/issues/3906中@BillBai的回答" target="_blank" rel="noopener">https://github.com/tensorflow/models/issues/3906中@BillBai的回答</a></li><li><a href="https://gist.github.com/DrSleep/4bce37254c5900545e6b65f6a0858b9c" target="_blank" rel="noopener">https://gist.github.com/DrSleep/4bce37254c5900545e6b65f6a0858b9c</a></li></ol>]]></content>
<summary type="html">
<h2 id="出现情景"><a href="#出现情景" class="headerlink" title="出现情景"></a>出现情景</h2><p>最近在用DeepLab v3+ 训练模型,已经训练好了自己的数据集。可是在验证的时候,程序总是报错。抛出<code>pred
</summary>
<category term="DeepLab" scheme="http://chzhou.cc/tags/DeepLab/"/>
<category term="DL" scheme="http://chzhou.cc/tags/DL/"/>
<category term="ML" scheme="http://chzhou.cc/tags/ML/"/>
</entry>
<entry>
<title>获取合约在链上运行的实际Code</title>
<link href="http://chzhou.cc/2018/05/29/%E8%8E%B7%E5%8F%96%E5%90%88%E7%BA%A6%E5%9C%A8%E9%93%BE%E4%B8%8A%E8%BF%90%E8%A1%8C%E7%9A%84%E5%AE%9E%E9%99%85Code/"/>
<id>http://chzhou.cc/2018/05/29/获取合约在链上运行的实际Code/</id>
<published>2018-05-29T15:59:46.000Z</published>
<updated>2018-05-29T16:02:14.774Z</updated>
<content type="html"><![CDATA[<h2 id="目的"><a href="#目的" class="headerlink" title="目的"></a>目的</h2><p>在Etherscan网站上获取到的bytecode,是”Contract Creation Code”,这个code 里面在前面添加了constructor信息。为了比对</p><ul><li>web3.eth.getCode(Address)</li><li>solc编译</li><li>Etherscan网站获取的Contract Creation Code</li></ul><p>这三者的差异,以便为之后的合约检查做准备,需要获得某个合约地址的code。</p><p>在这里,后两种方法很简单。主要遇到的问题在第一个方法。</p><h2 id="Geth客户端内web3调用"><a href="#Geth客户端内web3调用" class="headerlink" title="Geth客户端内web3调用"></a>Geth客户端内web3调用</h2><p>在用web3调用的时候,需要在geth客户端里运行命令。</p><p>首先用<code>geth console</code>命令启动geth,进入console界面。但是通过调用web3的一些诸如获取交易信息,余额等命令,总是返回错误或者0。经过网上查询得知,这是因为geth客户端并没有把所有的主网结点信息下载下来,那么这样调用的结果自然就是错误。</p><p>在服务器上花了一晚上的时间,把主网的所有结点都下载了下来,结果调用后还是返回错误。经过进一步的查询得知,下载数据只是第一步,还有进行交叉验证信息这一步。第二步才是最耗费时间的步骤 。有个网友反映,自己下载所有的结点花了几个小时,结果进行验证这个步骤就花了一周多,并且产生的数据就有220G多。</p><p>所以弃用在geth客户端内进行web3命令的计划。</p><h2 id="Infura-RPC-命令"><a href="#Infura-RPC-命令" class="headerlink" title="Infura + RPC 命令"></a>Infura + RPC 命令</h2><p>上一个步骤中,web3命令之所以不成功,主要是因为没有主网的所有结点信息。那也就是说有了信息后,那就可以调用了。此时,Infura派上用场。</p><blockquote><p>Infura:Infura 提供公开的 Ethereum 主网和测试网络节点</p></blockquote><p>在<a href="https://infura.io/" target="_blank" rel="noopener">Infura</a>官网上进行注册,便获得个人的API,此时不需要自己下载或者连接到主网,通过API访问,便可以取得主网的一切信息。</p><p><img src="https://i.loli.net/2018/05/29/5b0d5a4e3b19f.png" alt="Infura提供的链接.PNG"></p><p>此时已经有了主网结点信息。接下来就是如何使用web3命令。</p><p>web3命令,其实是调用的JSON-RPC。比如<code>web3.eth.getCode</code>, 在JSON-RPC中,对应的就是<code>eth_getCode</code>。所以便可以直接通过调用RPC的命令,就可以达到web3调用命令的目的。</p><p>通过查阅RPC的Doc,得到getCode的命令是</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br></pre></td><td class="code"><pre><span class="line">curl -X POST --data '{"jsonrpc":"2.0","method":"eth_getCode","params":["0xa94f5374fce5edbc8e2a8697c15331677e6ebf0b", "0x2"],"id":1}'</span><br></pre></td></tr></table></figure><p>那么调用就是<code>"RPC命令"+"https://mainnet.infura.io/<YOUR-API-KEY>"</code>,这样就完成了对主网信息的查询。</p><p>附上一个查询当前最新区块的示例命令:</p><figure class="highlight plain"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br></pre></td><td class="code"><pre><span class="line">curl -X POST --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' https://mainnet.infura.io/<YOUR-API-KEY></span><br><span class="line">{"jsonrpc":"2.0","id":1,"result":"0x56ef50"}</span><br></pre></td></tr></table></figure><p>在这里,<code>id</code>为“1”,说明此时连接的网络为主网。返回的result结果是<code>0x56ef50</code>,是16进制,转换成10进制,则值为<code>5697360</code>。正和当前的区块最高高度一致。</p><h2 id="参考资料"><a href="#参考资料" class="headerlink" title="参考资料"></a>参考资料</h2><ol><li><p>Infura官网</p><p><a href="https://infura.io/" target="_blank" rel="noopener">https://infura.io/</a></p></li><li><p>web3文档</p><p><a href="https://github.com/ethereum/wiki/wiki/JavaScript-API" target="_blank" rel="noopener">https://github.com/ethereum/wiki/wiki/JavaScript-API</a></p></li><li><p>JSON-RPC文档</p><p><a href="https://github.com/ethereum/wiki/wiki/JSON-RPC" target="_blank" rel="noopener">https://github.com/ethereum/wiki/wiki/JSON-RPC</a></p></li></ol>]]></content>
<summary type="html">
<h2 id="目的"><a href="#目的" class="headerlink" title="目的"></a>目的</h2><p>在Etherscan网站上获取到的bytecode,是”Contract Creation Code”,这个code 里面在前面添加了con
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>Vulnerable Contracts Resource</title>
<link href="http://chzhou.cc/2018/05/26/ETH%20other%20known%20bugs/"/>
<id>http://chzhou.cc/2018/05/26/ETH other known bugs/</id>
<published>2018-05-26T13:47:43.000Z</published>
<updated>2019-03-18T15:07:33.284Z</updated>
<content type="html"><![CDATA[<ul><li><p><strong>51% attack</strong> (blockchain)</p><ul><li>历史上曾经发生超过过10次51%攻击,仅在上周(2018年5月26日)就发生了3次攻击</li><li>日本加密货币 Monacoin 被一名矿工获得了 57% 的网络算力</li><li>BTG被黑客控制了51%的网络总算力,通过控制区块回滚进行了“双花攻击”</li></ul><p>解决方法:</p><ul><li>保持算力分散 </li><li>避免与其他区块链的PoW算法冲突</li><li>预警机制</li><li>与矿池,交易所建立有效的沟通渠道</li></ul></li><li><p>短地址攻击 (EVM) </p><p>ERC20代币标准中,有一个标准化的transfer函数</p><p><code>function transfer(address _to, uint256 _value) returns (bool success)</code>, 当我们真正调用transfer的时候,在EVM里实际上是在解析一堆ABI字符。在解析到的字符串里,金额在目标地址的后面,并且是紧贴着的。</p><p>假如我们有这么一个地址,</p><p>0x12345678901234567890123456789012345678<strong>00</strong></p><p>如果进行交易的时候,故意把地址末尾两个0去掉,那么EVM就会从<code>_value</code>的高位取0来进行补充。这样的话<code>_value</code>就少位数了,EVM在之后就会给金额后面补零来处理<code>_value</code> ,意味着该数值左移,增加了16*16=256倍。</p><p>解决方法:</p><ul><li>交易所在提币的时候,需要严格校验用户输入的地址,这样可以尽早在前端就禁止掉恶意的短地址</li></ul></li></ul>]]></content>
<summary type="html">
<ul>
<li><p><strong>51% attack</strong> (blockchain)</p>
<ul>
<li>历史上曾经发生超过过10次51%攻击,仅在上周(2018年5月26日)就发生了3次攻击</li>
<li>日本加密货币 Monacoin 被一名矿
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>ETH safe lib</title>
<link href="http://chzhou.cc/2018/05/25/ETH%20safe%20lib/"/>
<id>http://chzhou.cc/2018/05/25/ETH safe lib/</id>
<published>2018-05-25T14:57:08.000Z</published>
<updated>2019-03-18T15:04:59.596Z</updated>
<content type="html"><![CDATA[<ul><li><p>BANKEX/solidity-float-point-calculation</p><p><a href="https://github.com/BANKEX/solidity-float-point-calculation" target="_blank" rel="noopener">GItHub地址</a>(24 star)</p><p>BANKEX Foundation is building an ecosystem that will help ensure transparency, stability and support, needed to implement blockchain technologies for the purpose of <strong>commercial integration</strong></p><p><a href="https://blog.bankex.org/bankex-foundation-floating-point-library-for-solidity-a6dd87636693" target="_blank" rel="noopener">Medium文章链接在这里</a></p><ul><li>该模块通过npm安装,核心文件 FloatMath.sol 在合约中import即可</li><li>解决的痛点:现在的solidity处理的数都是integer,fixed point numbers,并没有float point numbers,这将为以后合约往金融等需要高精度计算的行业的引用增加了困难。这个库就要解决这个问题。</li></ul></li><li><p>ds-math</p><p><a href="https://github.com/dapphub/ds-math" target="_blank" rel="noopener">GItHub地址</a>(35 star)</p><ul><li>import导入即可</li><li>同样是提供类似于SafeMath的库,对计算进行保证。并新提供了名为 wad (18 decimals) 和 ray (27 decimals) 的数据表示方法</li></ul></li><li><p><strong>awesome-solidity</strong></p><p><a href="https://github.com/bkrem/awesome-solidity#tutorials" target="_blank" rel="noopener">GitHub地址</a> (763 star)</p><ul><li>汇集了很多 solidity 相关的资源,从官方资源,教程到第三方工具等等</li></ul></li></ul>]]></content>
<summary type="html">
<ul>
<li><p>BANKEX/solidity-float-point-calculation</p>
<p><a href="https://github.com/BANKEX/solidity-float-point-calculation" target="_bla
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>Oyente Note</title>
<link href="http://chzhou.cc/2018/05/19/Oyente_Note/"/>
<id>http://chzhou.cc/2018/05/19/Oyente_Note/</id>
<published>2018-05-19T15:44:23.000Z</published>
<updated>2019-03-18T15:06:12.995Z</updated>
<content type="html"><![CDATA[<p>#基本文件</p><ul><li>oyente.py:程序的主入口,负责对依赖的检查以及对源文件的获取,通过input_helper进行处理传入symExec.py进行处理</li><li>analysis.py :==To do== 创建了检查 reentrancy bug, 计算gas值,检查false positive, 检查资金流是否等函数(调用了z3)(但前两个函数并未在其他文件中被调用)</li><li>ast_helper.py:==To do== 将状态转换为ast树(抽象语法树)</li><li>ast_walker.py:==To do== 遍历ast</li><li>basicblock.py:定义一个叫basicblock的类,其中定义一些函数,将在symExec中模拟opcode进行调用</li><li>batch_run.py:批量分析运行合约</li><li>ethereum_data.py:创建自己的数据源,从etherscan获得最新的代码和余额信息</li><li>ethereum_data1.py:创建自己的数据源,从某个服务器上获得最新的代码和余额信息</li><li>global_params.py:定义一些全局参数值</li><li>input_helper.py:对输入进行处理,比如从输入生成disasm文件等</li><li>opcodes.py:对opcode及对应消耗的gas值进行定义</li><li>run_tests.py:读取test_evm/test_data中的json文件进行测试</li><li>source_map.py:==To do== (获得源代码,获得函数,变量名字,转换等功能,主要从ast, input_helper获得帮助,提供给symExec,vulnerability)</li><li>symExec.py:程序的主要分析模块,涉及到的主要有创建cfg,对opcode进行模拟,检测漏洞等</li><li>utils.py:对在symExec涉及到的函数进行定义,比如对在if判断中两条路径都涉及到的变量进行重命名以进行区分等</li><li>vargenerator.py:在分析中生成所需要的符号变量</li><li>vulnerability.py:定义了各种vulnerability类,以及去除false positive的函数</li><li>state.json:提供分析的初始状态</li></ul><h1 id="分析流程"><a href="#分析流程" class="headerlink" title="分析流程"></a>分析流程</h1><ol><li><p>oyente.py</p><ul><li>从命令行中获取参数,对输入文件的格式,超时时间等全局的状态进行定义</li><li>可以对solidity program, evm bytecode, remote contracts三种格式的文件进行分析</li><li>对文件用<code>evm disasm</code>命令将其分解为opcodes,继而传入<strong>symExec.py</strong>中</li></ul></li><li><p>symExec.py</p><ol><li><ul><li><figure class="highlight python"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">def</span> <span class="title">build_cfg_and_analyze</span><span class="params">()</span>:</span></span><br><span class="line"> tokens = tokenize.generate_tokens(disasm_file.readline)</span><br><span class="line"> collect_vertices(tokens)</span><br><span class="line"> construct_bb()</span><br><span class="line"> construct_static_edges()</span><br><span class="line"> full_sym_exec()</span><br></pre></td></tr></table></figure></li><li><p><code>collect_vertices()</code>和 <code>construct_bb()</code>识别程序中的基本区块,将其存储为顶点。基本区块的识别是通过<code>JUMPDEST</code>, <code>STOP</code>, <code>RETURN</code>, <code>SUICIDE</code>, <code>JUMP</code> and <code>JUMPI</code> 作为分隔符进行的。每个基本区块都是basicblock.py的实例</p></li><li><p>将基本区块建好后,用<code>full_sym_exec</code>进行处理。对每个区块其中的opcode用<code>sym_exec_ins</code>进行处理</p></li><li><p><code>sym_exec_ins</code>对所有的opcode按照yellow_paper中的描述进行尽可能地模拟</p></li></ul></li><li><p>对 time dependency, callstack, reentrancy等漏洞创建检测函数返回结果</p></li><li><p>analysis.py, basicblock.py, global_params.py, input_helper.py, opcodes.py, source_map.py, utils.py, vargenerator.py, vulnerability.py, state.json这些文件在执行过程中提供所需要的类,函数,状态等</p></li><li><p>一些检测漏洞的标准(在项目的code.md中获取)</p><ul><li><p>Callstack attack</p><p>执行sysExec.py中的<strong>check_callstack_attack</strong>函数,如果<code>CALL</code>或者<code>CALLCODE</code>指令发现没有<code>SWAP4, POP, POP, POP, POP, ISZERO</code>(或 SWAP3 followed by 3 POP, etc.) 
在其后,则判断为有该漏洞(<code>if(owner.send(amount))</code> 生成的opcode序列即为如此,这是防止该类攻击的推荐写</p></li><li><p>Timestamp dependence attack</p><p>如果<code>path_condition</code>中的变量包含跟时间戳有关的符号变量,即为有该漏洞</p></li></ul></li></ol></li></ol><h1 id="一些注意事项和问题"><a href="#一些注意事项和问题" class="headerlink" title="一些注意事项和问题"></a>一些注意事项和问题</h1><p>##程序方面</p><ol><li><p>oyente.py</p><ul><li>z3的版本为4.5.1</li><li>evm的版本为1.7.3</li><li>solc的版本为0.4.19</li><li>oyente的版本为0.2.7</li></ul></li><li>symExec.py<ul><li><code>SUICIDE</code> 已经被 <code>SELFDESTRUCT</code> 取代</li></ul></li><li>opcodes.py<ul><li>在gas模块<ul><li><code>Gextcode</code> 消耗的gas值已从20转为700</li><li><code>Gsload</code>从50变为200</li><li><code>Gsuiside</code>已经为<code>Gselfdesturct`</code></li><li><code>Gcall</code>从40到700</li><li><code>Gex byte</code>从10变为50</li></ul></li><li>opcode集合有一些缺失</li></ul></li></ol>]]></content>
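<p>The basic-block step can be pictured with the simplified sketch below. This is my own illustration of the splitting rule described above (cut at <code>JUMPDEST</code> / <code>STOP</code> / <code>RETURN</code> / <code>SUICIDE</code> / <code>JUMP</code> / <code>JUMPI</code>), not Oyente's actual <code>collect_vertices</code> / <code>construct_bb</code> code.</p><figure class="highlight python"><pre><code>
# Simplified sketch of basic-block construction over `evm disasm` output:
# JUMPDEST starts a new block, the terminator opcodes end the current one.
BLOCK_ENDERS = {"STOP", "RETURN", "SUICIDE", "JUMP", "JUMPI"}

def split_basic_blocks(instructions):
    """instructions: list of (pc, opcode) pairs."""
    blocks, current = [], []
    for pc, op in instructions:
        if op == "JUMPDEST" and current:          # a jump target opens a new block
            blocks.append(current)
            current = []
        current.append((pc, op))
        if op in BLOCK_ENDERS:                    # terminators close the current block
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks

demo = [(0, "PUSH1"), (2, "PUSH1"), (4, "MSTORE"), (5, "JUMPI"),
        (6, "JUMPDEST"), (7, "CALLVALUE"), (8, "STOP")]
for block in split_basic_blocks(demo):
    print(block)
</code></pre></figure>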
<summary type="html">
<p>#基本文件</p>
<ul>
<li>oyente.py:程序的主入口,负责对依赖的检查以及对源文件的获取,通过input_helper进行处理传入symExec.py进行处理</li>
<li>analysis.py :==To do== 创建了检查 reentranc
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>Oyente issues</title>
<link href="http://chzhou.cc/2018/05/11/Oyente%20issues/"/>
<id>http://chzhou.cc/2018/05/11/Oyente issues/</id>
<published>2018-05-11T08:50:16.000Z</published>
<updated>2019-03-18T15:15:20.088Z</updated>
<content type="html"><![CDATA[<h1 id="Oyente-issues"><a href="#Oyente-issues" class="headerlink" title="Oyente issues"></a>Oyente issues</h1><ul><li>Flag <code>CALLCODE</code> is now deprecated. <code>CALL</code> and <code>DELEGATECALL</code> are needed.</li><li>对于 <code>MULTIPLICATION</code>造成的<code>overflow</code>还没有支持。只能检查出由<code>ADD</code>造成的溢出。(经过对BEC代码的检查,发现该issue仍未解决)</li></ul><p><img src="https://cdn-images-1.medium.com/max/2000/0*MZnv5M0iCeQ7eCGp.png" alt="BEC的overflow"></p><ul><li>可能会检测出 false positive 的 <code>Re-Entrancy</code> 的漏洞</li></ul>]]></content>
<summary type="html">
<h1 id="Oyente-issues"><a href="#Oyente-issues" class="headerlink" title="Oyente issues"></a>Oyente issues</h1><ul>
<li>Flag <code>CALLCODE<
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>ETH白皮书笔记</title>
<link href="http://chzhou.cc/2018/04/24/ETH%E7%99%BD%E7%9A%AE%E4%B9%A6%E7%AC%94%E8%AE%B0/"/>
<id>http://chzhou.cc/2018/04/24/ETH白皮书笔记/</id>
<published>2018-04-24T04:45:53.000Z</published>
<updated>2019-03-18T15:14:18.559Z</updated>
<content type="html"><![CDATA[<p>#历史</p><ul><li><p>拜占庭将军问题</p><ul><li>如果有N方参与到系统中,那么系统可以容忍N/4的恶意参与者</li><li><p>问题在于,在匿名的情况下,系统设置的安全边界容易遭受女巫攻击,因为一个攻击者可以在一台服务器或者僵尸网络上创建数以千计的节点,从而单方面确保拥有多数份额</p><p><strong>解决方法</strong></p></li><li><p>基于节点的去中心化共识协议与工作量证明机制结合在一起</p></li><li>拥有大量算力的节点有更大的影响力,但获得比整个网络更多的算力比创建一百万个节点困难得多(从而解决攻击问题)</li></ul></li><li><p>比特币系统的“状态”是所有已经被挖出的、没有花费的比特币(技术上称为<strong>“未花费的交易输出,unspent transaction outputs 或UTXO”</strong>)的集合。每个UTXO都有一个面值和所有者(由20个字节的本质上是密码学公钥的地址所定义[1])。一笔交易包括一个或多个输入和一个或多个输出。每个输入包含一个对现有UTXO的引用和由与所有者地址相对应的私钥创建的密码学签名。每个输出包含一个新的加入到状态中的UTXO。</p></li><li><p>状态转移函数<code>APPLY(S,TX)->S’</code>大体上可以如下定义: </p><ol><li>交易的每个输入:<ul><li>如果引用的UTXO不存在于现在的状态中(S),返回错误提示</li><li>如果签名与UTXO所有者的签名不一致,返回错误提示</li></ul></li><li>如果所有的UTXO输入面值总额小于所有的UTXO输出面值总额,返回错误提示</li><li>返回新状态S’,新状态S’中移除了所有的输入UTXO,增加了所有的输出UTXO</li></ol><p>第一步的第一部分<strong>防止交易的发送者花费不存在的比特币</strong>,第二部分<strong>防止交易的发送者花费其他人的比特币</strong>。第二步确保<strong>价值守恒</strong>。比特币的支付协议如下。假设Alice想给Bob发送11.7BTC。事实上,Alice不可能正好有11.7BTC。假设,她能得到的最小数额比特币的方式是:6+4+2=12。所以,她可以创建一笔有3个输入,2个输出的交易。第一个输出的面值是11.7BTC,所有者是Bob(Bob的比特币地址),第二个输出的面值是0.3BTC,所有者是Alice自己,也就是找零。</p></li><li><p>区块 = 时间戳 + 随机数 + 对上一个区块的引用(即哈希) + 上一区块生成以来发生的所有交易列表。这样随着更新就能够代表账本的最新状态</p></li><li><p>依照这个范式,检查一个区块是否有效的算法如下:</p><ol><li>检查区块引用的上一个区块是否存在且有效</li><li>检查区块的时间戳是否晚于以前的区块的时间戳,而且早于未来2小时</li><li>检查区块的工作量证明是否有效</li><li>将上一个区块的最终状态赋于<code>S[0]</code></li><li>假设TX是区块的交易列表,包含n笔交易。对于属于0……n-1的所有i,进行状态转换<code>S[i+1] = APPLY(S[i],TX[i])</code>。如果任何一笔交易i在状态转换中出错,退出程序,返回错误</li><li>返回正确,状态<code>S[n]</code>是这一区块的最终状态</li></ol></li><li><p>当出现攻击时到底发生了什么: (防止doublespent)</p><p>因为比特币的密码学基础是非常安全的,所以攻击者会选择攻击没有被密码学直接保护的部分:<strong>交易顺序</strong>。攻击者的策略非常简单: </p><ol><li>向卖家发送100BTC购买商品(尤其是无需邮寄的电子商品)</li><li>等待直至商品发出</li><li>创建另一笔交易,将相同的100BTC发送给自己的账户。</li><li>使比特币网络相信发送给自己账户的交易是最先发出的</li></ol><p>一旦步骤(1)发生,几分钟后矿工将把这笔交易打包到区块,假设是第270000个区块。大约一个小时以后,在此区块后面将会有五个区块,每个区块间接地指向这笔交易,从而确认这笔交易。这时卖家收到货款,并向买家发货。因为我们假设这是数字商品,攻击者可以即时收到货。现在,攻击者<strong>创建另一笔交易</strong>,将相同的100BTC发送到自己的账户。如果<strong>攻击者只是向全网广播这一消息,这一笔交易不会被处理。矿工会运行状态转换函数APPLY(S,TX),发现这笔交易将花费已经不在状态中的UTXO</strong>。所以,攻击者会对区块链进行<strong>分叉,将第269999个区块作为父区块重新生成第270000个区块,在此区块中用新的交易取代旧的交易</strong>。因为区块数据是不同的,这要求重新进行工作量证明。另外,因为攻击者生成的新的第270000个区块有不同的哈希,所以原来的第270001到第270005的区块不指向它,因此原有的区块链和攻击者的新区块是完全分离的。在发生区块链分叉时,区块链长的分支被认为是诚实的区块链,合法的的矿工将会沿着原有的第270005区块后挖矿,只有攻击者一人在新的第270000区块后挖矿。<strong>攻击者为了使得他的区块链最长,他需要拥有比除了他以外的全网更多的算力来追赶</strong>(即51%攻击)</p></li><li><p>比特币系统的一个重要的可扩展特性是:它的区块存储在多层次的数据结构中。一个区块的哈希实际上只是区块头的哈希,区块头是包含时间戳、随机数、上个区块哈希和存储了所有的区块交易的默克尔树的根哈希的长度大约为200字节的一段数据。(Merkle Trees)</p></li><li><p><strong>任何对于默克尔树(Merkle Trees)的任何部分进行改变的尝试都会最终导致链上某处的不一致</strong>。默克尔树是一种二叉树,由一组叶节点、一组中间节点和一个根节点构成。最下面的大量的叶节点包含基础数据,每个中间节点是它的两个子节点的哈希,根节点也是由它的两个子节点的哈希,代表了默克尔树的顶部。默克尔树的目的是<strong>允许区块的数据可以零散地传送:节点可以从一个源下载区块头,从另外的源下载与其有关的树的其它部分,而依然能够确认所有的数据都是正确的</strong>。之所以如此是因为哈希向上的扩散:如果一个恶意用户尝试在树的下部加入一个伪造的交易,所引起的改动将导致树的上层节点的改动,以及更上层节点的改动,最终导致根节点的改动以及区块哈希的改动,这样协议就会将其记录为一个完全不同的区块(几乎可以肯定是带着不正确的工作量证明的)</p></li></ul><p>#以太坊</p><ul><li><p>状态是由被称为“账户”(每个账户由一个20字节的地址)的对象和在两个账户之间转移价值和信息的状态转换构成的。以太坊的账户包含四个部分:</p><ol><li>随机数(nonce),用于确保每笔交易只能被处理一次的计数器</li><li>账户目前的以太币余额</li><li>账户的合约代码,如果有的话</li><li>账户的存储(默认为空)</li></ol></li><li><p>以太坊有两种类型的账户:外部所有账户(<strong>externally owned accounts</strong>)(由私钥控制)和合约账户(<strong>contract accounts</strong>)(由合约代码控制)</p></li><li><p>以太坊中的<strong>“交易”</strong>(transaction)是指存储从<strong>外部账户</strong>发出的消息的签名数据包,包含:</p><ol><li>The recipient of the message</li><li>A signature identifying the sender</li><li>The amount of ether to transfer from the sender to the 
recipient</li><li>An optional data field</li><li>A <code>STARTGAS</code> value, representing the maximum number of computational steps the transaction execution is allowed to take</li><li>A <code>GASPRICE</code> value, representing the fee the sender pays per computational step</li></ol><p>前三部分是加密货币中的标准部分。<code>STARTGAS</code> and <code>GASPRICE</code> fields are crucial for Ethereum’s anti-denial of service model。为了防止代码的指数型爆炸和无限循环以及恶意攻击,每笔交易需要对执行代码所引发的计算步骤—包括初始消息和所有执行中引发的消息—做出限制。计算的基本单位就是<em>“gas”</em>。通常一个计算耗费一个“gas”,但是有的操作可能耗费更多的”gas“,因为这些操作意味着需要更多的计算和资源。这样的话,也能防止恶意攻击者,因为操作所需要的花费是和操作成正比的。</p></li><li><p>合约有能力发送<strong>”消息“</strong>给其他合约。”消息“和”交易“相像,除了”消息“是由合约产生的,而不是外部账户。它包含:</p><ol><li>The sender of the message (implicit)</li><li>The recipient of the message</li><li>The amount of ether to transfer alongside the message</li><li>An optional data field</li><li>A <code>STARTGAS</code> value</li></ol><p>A message is produced when a contract currently executing code executes the<code>CALL</code>opcode, which produces and executes a message. Like a transaction, a message leads to the recipient account running its code。</p></li><li><p>以太坊的状态转换函数:<code>APPLY(S,TX) -> S'</code>,可以定义如下:</p><ol><li><p>检查交易的格式是否正确(即有正确数值)、签名是否有效和随机数(nonce)是否与发送者账户的随机数匹配。如否,返回错误。</p></li><li><p>计算交易费用:<code>fee=STARTGAS * GASPRICE</code>,并从签名中确定发送者的地址。从发送者的账户中减去交易费用和增加发送者的随机数。如果账户余额不足,返回错误。</p></li><li><p>设定初值<code>GAS = STARTGAS</code>,并根据交易中的字节数减去一定量的gas。</p></li><li><p>从发送者的账户转移value到接收者账户。如果接收账户不存在,创建此账户。如果接收账户是一个合约,运行合约的代码,直到代码运行结束或者gas用完。</p></li><li><p>如果因为发送者账户没有足够的钱或者代码执行耗尽gas导致value转移失败,恢复原来的状态,但是还需要支付交易费用,交易费用加至矿工账户。</p></li><li><p>否则,将所有剩余的gas归还给发送者,消耗掉的gas作为交易费用发送给矿工。</p><p><img src="https://i.loli.net/2018/04/23/5addad5bb5039.png" alt="ethertransition.png"></p></li></ol></li><li><p>合约的代码由一系列字节构成,每一个字节代表一种操作。一般而言,代码执行是无限循环,程序计数器每增加一(初始值为零)就执行一次操作,直到代码执行完毕或者遇到错误,或者检测到<code>STOP</code>或者<code>RETURN</code>指令。操作可以访问三种存储数据的空间:</p><ol><li><strong>堆栈</strong>,一种后进先出的数据存储,数值可以入栈,出栈</li><li><strong>内存</strong>,可无限扩展的字节队列(byte array)</li><li><strong>合约的长期存储</strong>,一个秘钥/数值的存储。与计算结束即重置的堆栈和内存不同,存储内容将长期保持</li></ol></li><li><p>EVM运行时,其完整的计算状态可以由元组<strong>(<code>block_state, transaction, message, code, memory, stack, pc, gas</code>)</strong>来定义。</p></li><li><p>以太坊的区块链在很多方面类似于比特币区块链。它们的区块链架构的不同在于,以太坊区块不仅包含交易记录和最近的状态,还包含区块序号和难度值。</p></li><li><p>以太坊的区块确认算法如下:</p><ol><li>检查区块引用的上一个区块是否存在和有效。</li><li>检查区块的时间戳是否比引用的上一个区块大,而且小于15分钟。</li><li>检查区块序号、难度值、 交易根,叔根(uncle root)和gas limit(许多以太坊特有的底层概念)是否有效。</li><li>检查区块的工作量证明是否有效。</li><li>将S[0]赋值为上一个区块的STATE_ROOT。</li><li>将TX赋值为区块的交易列表,一共有n笔交易。对于属于0……n-1的i,进行状态转换<code>S[i+1] = APPLY(S[i],TX[i])</code>。如果任何一个转换发生错误,或者程序执行到此处所花费gas超过了<code>GASLIMIT</code>,返回错误。</li><li>用S[n]给S_FINAL赋值, 向矿工支付区块奖励。</li><li>检查<code>S_FINAL</code>是否与<code>STATE_ROOT</code>相同。如果相同,区块是有效的。否则,区块是无效的。(Check if the Merkle tree root of the state <code>S_FINAL</code> is equal to the final state root provided in the block header. 
If it is, the block is valid; otherwise, it is not valid.)</li></ol><p>这个方法看似效率低,因为它需要存储每个区块的所有状态,但是事实上以太坊的确认效率可以与比特币相提并论。原因是状态存储在树结构中(tree structure),每增加一个区块只需要改变树结构的一小部分。因此,一般而言,两个相邻的区块的树结构的大部分应该是相同的,因此存储一次数据,可以利用指针(即子树哈希)引用两次。A special kind of tree known as a <strong>“Patricia tree” </strong>is used to accomplish this, including a modification to the Merkle tree concept that allows for nodes to be inserted and deleted, and not just changed, efficiently.</p></li></ul><h1 id="应用"><a href="#应用" class="headerlink" title="应用"></a>应用</h1><ul><li><p>以太坊之上有三种应用:</p><ul><li>金融应用,比如子货币,金融衍生品,对冲合约,储蓄钱包,遗嘱,甚至一些种类的全面的雇佣合约</li><li>半金融应用,这里有钱的存在但也有很重的非金钱的方面,一个完美的例子是为解决计算问题而设的自我强制悬赏</li><li>在线投票和去中心化治理这样的完全的非金融应用</li></ul></li><li><p>令牌系统(Token Systems)</p><p>关键的一点是理解,所有的货币或者令牌系统,从根本上来说是一个带有如下操作的数据库:<strong>从A中减去X单位并把X单位加到B上,前提条件是(1)A在交易之前有至少X单位以及(2)交易被A批准</strong>。实施一个令牌系统就是把这样一个逻辑实施到一个合约中去。</p><p>理论上,基于以太坊的充当子货币的令牌系统可能包括一个基于比特币的链上元币所缺乏的重要功能:直接用这种货币支付交易费的能力。实现这种能力的方法是在合约里维护一个以太币账户以用来为发送者支付交易费,通过收集被用来充当交易费用的内部货币并把它们在一个不断运行的拍卖中拍卖掉,合约不断为该以太币账户注资。这样用户需要用以太币“激活”他们的账户,但一旦账户中有以太币它将会被重复使用因为每次合约都会为其充值。</p></li><li><p>去中心化存储</p></li><li><p>去中心化自治组织(DAO, decentralized autonomous organization)</p><p>理论上代码是不可更改的,然而通过把代码主干放在一个单独的合约内并且把合约调用的地址指向一个可更改的存储依然可以容易地绕开障碍而使代码变得可修改</p></li><li><p><strong>A decentralized data feed</strong>. For financial contracts for difference, it may actually be possible to decentralize the data feed via a protocol called <strong>SchellingCoin</strong>. SchellingCoin basically works as follows: N parties all put into the system the value of a given datum (eg. the ETH/USD price), the values are sorted, and everyone between the 25th and 75th percentile gets one token as a reward. <strong>Everyone has the incentive to provide the answer that everyone else will provide, and the only value that a large number of players can realistically agree on is the obvious default: the truth</strong>. This creates a decentralized protocol that can theoretically provide any number of values, including the ETH/USD price, the temperature in Berlin or even the result of a particular hard computation.</p></li></ul><h1 id="杂项和关注"><a href="#杂项和关注" class="headerlink" title="杂项和关注"></a>杂项和关注</h1><ul><li>改进的“幽灵”协议(”Greedy Heaviest Observed Subtree” (GHOST) protocol)<ul><li>幽灵协议提出的动机是因为当前快速确认的块链因为区块的高作废率而受到低安全性困扰。因为废区块不会被认为是有效的,而且矿工也会被认为对网络安全做出贡献。而由此也引发出一个中心化问题,拥有的算力矿池份额越大,其挖矿就更有效率,而且也能控制区块链的产生。</li><li>改进的算法就是,<strong>通过在计算哪条链“最长”的时候把废区块也包含进来</strong>。不仅一个区块的父区块和更早的祖先块,祖先块的作废的后代区块(以太坊术语中称之为“叔区块”)也被加进来以计算哪一个区块拥有支持其的最大工作量证明。也给予“叔区块”以及将其纳入计算的“侄子区块”不同奖励,但交易费用不奖励给叔区块。并且废区块只能以叔区块的身份被其父母的第二代至第五代后辈区块,而不是更远关系的后辈区块(例如父母区块的第六代后辈区块,或祖父区块的第三代后辈区块)纳入计算。</li></ul></li><li>交易费用存在一些瑕疵,作为弥补,以太坊简单地建立了一个浮动地上限:没有区块能够包含比BLK_LIMIT_FACTOR 倍长期指数移动平均值更多的操作数。这里,<code>blk.oplimit = floor((blk.parent.oplimit * (EMAFACTOR - 1) + floor(parent.opcount * BLK_LIMIT_FACTOR)) / EMA_FACTOR)</code>,BLK_LIMIT_FACTOR 和 EMA_FACTOR 是暂且被设为 65536 和 1.5 的常数。</li><li>以太坊的挖矿算法减轻了专用挖矿硬件带来的优势,以及减轻了矿池带来的中心化问题。</li></ul>]]></content>
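<p>The three-step <code>APPLY(S,TX) -> S'</code> described above translates almost line-for-line into code. The sketch below is for illustration only (signatures are reduced to a plain owner match) and replays the Alice-pays-Bob-11.7-BTC example from the text.</p><figure class="highlight python"><pre><code>
# Illustrative UTXO state transition: state maps utxo_id -> (owner, value).
def apply_tx(state, tx):
    """tx = {"inputs": [(utxo_id, spender), ...], "outputs": [(utxo_id, owner, value), ...]}"""
    new_state = dict(state)
    total_in = 0
    for utxo_id, spender in tx["inputs"]:
        if utxo_id not in new_state:
            raise ValueError("referenced UTXO not in state")     # step 1, part 1
        owner, value = new_state.pop(utxo_id)                    # inputs are consumed
        if owner != spender:
            raise ValueError("signature does not match owner")   # step 1, part 2
        total_in += value
    total_out = sum(v for _, _, v in tx["outputs"])
    if total_in < total_out:
        raise ValueError("inputs smaller than outputs")          # step 2: value conservation
    for utxo_id, owner, value in tx["outputs"]:                  # step 3: add the new UTXOs
        new_state[utxo_id] = (owner, value)
    return new_state

# Alice pays Bob 11.7 BTC from 6+4+2 and takes 0.3 back as change, as in the example above.
state = {"u1": ("alice", 6.0), "u2": ("alice", 4.0), "u3": ("alice", 2.0)}
tx = {"inputs": [("u1", "alice"), ("u2", "alice"), ("u3", "alice")],
      "outputs": [("u4", "bob", 11.7), ("u5", "alice", 0.3)]}
print(apply_tx(state, tx))
</code></pre></figure>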
<summary type="html">
<p>#历史</p>
<ul>
<li><p>拜占庭将军问题</p>
<ul>
<li>如果有N方参与到系统中,那么系统可以容忍N/4的恶意参与者</li>
<li><p>问题在于,在匿名的情况下,系统设置的安全边界容易遭受女巫攻击,因为一个攻击者可以在一台服务器或者僵尸网络上创
</summary>
<category term="ETH" scheme="http://chzhou.cc/tags/ETH/"/>
</entry>
<entry>
<title>algs4 percolation问题</title>
<link href="http://chzhou.cc/2017/11/09/algs4-percolation%E9%97%AE%E9%A2%98/"/>
<id>http://chzhou.cc/2017/11/09/algs4-percolation问题/</id>
<published>2017-11-09T07:39:40.000Z</published>
<updated>2017-11-09T09:40:09.476Z</updated>
<content type="html"><![CDATA[<h3 id="如何判断这个点阵已经percolate了?"><a href="#如何判断这个点阵已经percolate了?" class="headerlink" title="如何判断这个点阵已经percolate了?"></a>如何判断这个点阵已经percolate了?</h3><p>一个比较trick的解决办法就是引入”virtual top”和”virtual bottom”这两个点。如图所示。</p><p><img src="https://i.loli.net/2017/11/09/5a041187dea72.png" alt="vitual top"></p><p>这样,这个系统是否percolate的问题就转换为虚拟顶部结点能否与虚拟底部结点相连。第一行或者最后一行的点被打开的时候就立马与虚拟结点相连。</p><p>###如何解决backwash问题?</p><p>什么是backwash?backwash问题就是由于虚拟结点的存在,本来一些点是不能被认为是full的(也就是说不能连接到顶部),但是由于其能和虚拟底部结点相连,虚拟底部结点又能通过其他点与顶部相连。这样这个的点在判断的时候就会认为是full的。</p><blockquote><p>In the context of Percolation, the backwash issue is that some site might be mistakenly judged as a full site (A full site is an open site that can be connected to an open site in the top row via a chain of neighboring (left, right, up, down) open sites.) if we directly adopt the dummy nodes suggested in the course material, i.e., a top virtual node connected to each site in the first first top row, another bottom virtual node connected to each site in the last bottom row. [看这个博文][<a href="https://www.sigmainfy.com/blog/avoid-backwash-in-percolation.html]" target="_blank" rel="noopener">https://www.sigmainfy.com/blog/avoid-backwash-in-percolation.html]</a></p></blockquote><p><img src="https://www.sigmainfy.com/images/percolation_backwash.png" alt="backwash"></p><p>在解决这个问题的就是引入两个并查集,一个集合里只包含虚拟顶部结点,另一个集合里包括虚拟顶部和底部结点。这样在判断一个点是不是full的时候,看看这个点在两个集合里能不能连接到顶部结点,或者说在只有虚拟顶部结点里的集合里能不能连接到顶部。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">public</span> <span class="keyword">boolean</span> <span class="title">isFull</span><span class="params">(<span class="keyword">int</span> row, <span class="keyword">int</span> col)</span> </span>{</span><br><span class="line"> validate(row, col); <span class="comment">//判断该坐标合理与否</span></span><br><span class="line"> <span class="keyword">int</span> q = xyTo1d(row, col); <span class="comment">//将二维坐标转换为一维的数组坐标</span></span><br><span class="line"></span><br><span class="line"> <span class="keyword">if</span> (idOnlyTop.connected(q, <span class="number">0</span>)) { <span class="comment">// id.find(0) == idOnlyTop.find(q)</span></span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">true</span>;</span><br><span class="line"> } </span><br><span class="line"> <span class="keyword">return</span> <span class="keyword">false</span>;</span><br><span class="line"> }</span><br></pre></td></tr></table></figure><p>###如何存储结点的开关与否信息?</p><p>刚开始做的时候不知道该怎么存储一个结点的开关信息,想了一些办法,总觉得很麻烦。之后通过在晚上查询才得出可以直接创建一个boolean类型的数组,这个点被打开的时候就记该值为true,反之为false。而且要记住刚开始定义的时候这个数组的值就全是false的。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">private</span> <span class="keyword">boolean</span>[] 
state; <span class="comment">//先定义存储数组开关信息的boolean数组类型</span></span><br><span class="line"></span><br><span class="line">state = <span class="keyword">new</span> <span class="keyword">boolean</span>[n * n + <span class="number">1</span>]; <span class="comment">//进行定义</span></span><br><span class="line"></span><br><span class="line"><span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">open</span><span class="params">(<span class="keyword">int</span> row, <span class="keyword">int</span> col)</span> </span>{</span><br><span class="line"> validate(row, col);</span><br><span class="line"> <span class="keyword">int</span> i = xyTo1d(row, col);</span><br><span class="line"> state[i] = <span class="keyword">true</span>;</span><br><span class="line"> count++;</span><br><span class="line"> <span class="comment">/* 之后的代码就是把这个点打开后与附近同样打开的点union的过程 */</span></span><br><span class="line">}</span><br></pre></td></tr></table></figure><h3 id="其他的一些trick"><a href="#其他的一些trick" class="headerlink" title="其他的一些trick"></a>其他的一些trick</h3><ul><li>因为程序要求在每个输入的时候,对于不合法的输入要抛出异常,所以可以单独建立validate()函数,在每个接受输入的类里第一句就运行这个函数,这样能及时抛出异常。</li><li>输入的时候,输入的是这个点的二维坐标,但是在实际存储的时候所有的点都是在一维数组里存储着,所以可以先建立一个xyTo1d()函数,这样就能快速转换。而不是每次都进行计算。</li></ul><p>理解模块化。对于一些常用到的过程进行封装,成为函数,然后直接通过接口进行引用。代码简洁易懂。而且在写代码的时候也有条例。</p>]]></content>
<summary type="html">
<h3 id="如何判断这个点阵已经percolate了?"><a href="#如何判断这个点阵已经percolate了?" class="headerlink" title="如何判断这个点阵已经percolate了?"></a>如何判断这个点阵已经percolate了?</
</summary>
<category term="algs4" scheme="http://chzhou.cc/tags/algs4/"/>
</entry>
<entry>
<title>algs4第一周 一点时间复杂度</title>
<link href="http://chzhou.cc/2017/11/08/algs4%E7%AC%AC%E4%B8%80%E5%91%A8%20%E4%B8%80%E7%82%B9%E6%97%B6%E9%97%B4%E5%A4%8D%E6%9D%82%E5%BA%A6/"/>
<id>http://chzhou.cc/2017/11/08/algs4第一周 一点时间复杂度/</id>
<published>2017-11-08T13:59:00.000Z</published>
<updated>2017-11-09T09:40:32.910Z</updated>
<content type="html"><![CDATA[<p>###Quick-find中union( ) 操作性能分析</p><p>书中说明每次union( ) 操作访问数组的次数是(n+3)~ (2n+1)之间。首先先看一下代码块。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br><span class="line">9</span><br><span class="line">10</span><br><span class="line">11</span><br><span class="line">12</span><br><span class="line">13</span><br><span class="line">14</span><br><span class="line">15</span><br></pre></td><td class="code"><pre><span class="line"><span class="function"><span class="keyword">public</span> <span class="keyword">void</span> <span class="title">union</span><span class="params">(<span class="keyword">int</span> p, <span class="keyword">int</span> q)</span> </span>{</span><br><span class="line"> <span class="keyword">int</span> pid = find(p);</span><br><span class="line"> <span class="keyword">int</span> qid = find(q);</span><br><span class="line"> </span><br><span class="line"> <span class="keyword">if</span> (pid == qid) {</span><br><span class="line"> <span class="keyword">return</span>;</span><br><span class="line"> }</span><br><span class="line"> </span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < id.length; i++) {</span><br><span class="line"> <span class="keyword">if</span> (id[i] == pid) {</span><br><span class="line"> id[i] = qid;</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line"> count--;</span><br><span class="line">}</span><br></pre></td></tr></table></figure><ol><li>每次调用union()总会调用find()两次,这样的话会访问数组2次。</li><li>循环里for循环会执行n次,在判断里会访问数组n次。</li><li>在union()操作中,至少会有一个数的会被改变,那么就是1次;而最多除了q之外所有的数都要和q连接,那么的话就会有n-1个数的值被改变,就是n-1次。</li></ol><p>综上,union()操作访问数组的次数在(2+n+1)~(2+n+n-1)也就是(n+3)~(2n+1)次。</p><p>###一个三层嵌套时间复杂度另类数学求法</p><p>从一组数里找出三个数之和为0的组合。代码如下。</p><figure class="highlight java"><table><tr><td class="gutter"><pre><span class="line">1</span><br><span class="line">2</span><br><span class="line">3</span><br><span class="line">4</span><br><span class="line">5</span><br><span class="line">6</span><br><span class="line">7</span><br><span class="line">8</span><br></pre></td><td class="code"><pre><span class="line"><span class="keyword">for</span> (<span class="keyword">int</span> i = <span class="number">0</span>; i < n; i++) {</span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> j = i + <span class="number">1</span>; j < n; j++) {</span><br><span class="line"> <span class="keyword">for</span> (<span class="keyword">int</span> k = j + <span class="number">1</span>; k < n; k++) {</span><br><span class="line"> <span class="keyword">if</span> (a[i] + a[j] + a[k] == <span class="number">0</span>) {</span><br><span class="line"> cnt++;</span><br><span class="line"> }</span><br><span class="line"> }</span><br><span class="line">}</span><br></pre></td></tr></table></figure><p>仅从数学分析,那么该数学模型就是从n个数中抽出3个数,看有几个组合。根据排列组合知识可知,为$ C_n^3$, 展开即为$\frac{n^3}{6}-\frac{n^2}{2}+\frac{n}{3}$。所以时间复杂度为O($n^3$)。</p>]]></content>
<summary type="html">
<p>###Quick-find中union( ) 操作性能分析</p>
<p>书中说明每次union( ) 操作访问数组的次数是(n+3)~ (2n+1)之间。首先先看一下代码块。</p>
<figure class="highlight java"><table><tr><t
</summary>
<category term="algs4" scheme="http://chzhou.cc/tags/algs4/"/>
<category term="算法" scheme="http://chzhou.cc/tags/%E7%AE%97%E6%B3%95/"/>
</entry>
</feed>