@@ -153,8 +153,8 @@ run asynchronously, they are processed concurrently, thus faster.
 To illustrate the difference, the following code distributes some random data
 and then synchronizes correctly, but is essentially serial:
 ```julia
-julia> @time for i in workers()
-           fetch(save_at(i, :x, :(randn(10000,10000))))
+julia> @time for w in workers()
+           fetch(save_at(w, :x, :(randn(10000,10000))))
        end
   1.073267 seconds (346 allocations: 12.391 KiB)
 ```
@@ -164,7 +164,8 @@ make the code parallel, and usually a few times faster (depending on the number
 of workers):
 
 ```julia
-julia> @time fetch.([save_at(i, :x, :(randn(10000,10000))) for i in workers()])
+julia> @time map(fetch, [save_at(w, :x, :(randn(10000,10000)))
+                         for w in workers()])
   0.403235 seconds (44.50 k allocations: 2.277 MiB)
 3-element Array{Nothing,1}:
  nothing
@@ -175,8 +176,8 @@ The same is applicable for retrieving the sub-results in parallel. This example
 demonstrates that multiple workers can do some work at the same time:
 
 ```julia
-julia> @time fetch.([get_from(i, :(begin sleep(1); myid(); end))
-                     for i in workers()])
+julia> @time map(fetch, [get_from(i, :(begin sleep(1); myid(); end))
+                         for i in workers()])
   1.027651 seconds (42.26 k allocations: 2.160 MiB)
 3-element Array{Int64,1}:
  2
@@ -211,7 +212,8 @@ of individual workers. The storage of the variables is otherwise same as with
 the basic data-moving function -- you can e.g. manually check the size of the
 resulting slices on each worker using `get_from`:
 ```julia
-julia> fetch.([get_from(w, :(size($(dataset.val)))) for w in dataset.workers])
+julia> map(fetch, [get_from(w, :(size($(dataset.val))))
+                   for w in dataset.workers])
 3-element Array{Tuple{Int64,Int64},1}:
  (333, 3)
  (333, 3)
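For trying the serial-vs-concurrent distinction from the hunks above outside this documentation's context, the following is a minimal self-contained sketch using only the stdlib `Distributed` package; plain `remotecall` stands in for `save_at`/`get_from`, which are assumed here to behave analogously (issue a remote computation, return a `Future`). The key point is the same: fetching inside the loop blocks before the next call is issued, while collecting all `Future`s first and fetching afterwards lets the remote work overlap.

```julia
using Distributed

# Start two local workers if none are present yet.
nworkers() > 1 || addprocs(2)

# Serial: each fetch blocks before the next remotecall is issued,
# so the total time is roughly the sum of the individual sleeps.
t_serial = @elapsed for w in workers()
    fetch(remotecall(() -> (sleep(1); myid()), w))
end

# Concurrent: issue all remote calls first, collecting the Futures,
# and only then fetch them -- the sleeps overlap across workers,
# so this takes roughly 1 second regardless of the worker count.
t_parallel = @elapsed map(fetch, [remotecall(() -> (sleep(1); myid()), w)
                                  for w in workers()])
```

With two workers, `t_serial` is around 2 seconds while `t_parallel` stays near 1 second, mirroring the timings shown in the diff.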