Parallel tasks in Python: concurrent.futures

concurrent.futures has been part of the standard library since Python 3.2. If you're using an older version of Python, you need to install the backported futures package:

$ pip install futures


You should use ProcessPoolExecutor for CPU-intensive tasks, while ThreadPoolExecutor is better suited for network operations or I/O. ProcessPoolExecutor uses the multiprocessing module, which is not affected by the GIL (Global Interpreter Lock), but this also means that only picklable objects can be executed and returned.
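As a minimal sketch of the picklable restriction (the function names here are made up for illustration): a plain module-level function can be pickled, and therefore shipped to a worker process, while a lambda cannot.

```python
import pickle

def double(n):
    # a module-level function is picklable: it is serialized by reference
    return n * 2

square = lambda n: n * n  # lambdas cannot be pickled by reference

payload = pickle.dumps(double)  # works

try:
    pickle.dumps(square)
    lambda_picklable = True
except Exception:
    lambda_picklable = False

print(lambda_picklable)
```

This is why submitting a lambda (or a nested function) to a ProcessPoolExecutor fails, while the same call works with a ThreadPoolExecutor.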

In Python 3.5+, map() accepts an optional chunksize argument. For very long iterables, a large chunksize can significantly improve performance compared to the default of 1. With ThreadPoolExecutor, chunksize has no effect.

from concurrent.futures import ThreadPoolExecutor
import time

import requests

def fetch(a):
    # hypothetical endpoint (the URL in the original was elided);
    # returns the request's query string arguments as JSON
    url = '{0}'.format(a)
    r = requests.get(url)
    result = r.json()['args']
    return result

start = time.time()

# if max_workers is None or not given, it defaults to the number of
# processors multiplied by 5 (min(32, os.cpu_count() + 4) since Python 3.8)
with ThreadPoolExecutor(max_workers=None) as executor:
    for result in, range(30)):
        print('response: {0}'.format(result))

print('time: {0}'.format(time.time() - start))


executor.submit() and as_completed()

executor.submit() returns a Future object. A Future is basically an object that encapsulates the asynchronous execution of a function which will finish (or raise an exception) at some point in the future.
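A minimal sketch of working with a Future directly (slow_add() is a made-up example function):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_add(a, b):
    time.sleep(0.5)
    return a + b

with ThreadPoolExecutor(max_workers=1) as executor:
    future = executor.submit(slow_add, 1, 2)
    print(future.done())      # most likely False: the call is still running
    result = future.result()  # blocks until the call finishes or raises
    print(future.done())      # True
    print(result)             # 3
```

future.result() re-raises any exception that occurred inside the submitted function, which is why it is usually wrapped in try/except.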

The main difference between map() and as_completed() is that map() returns results in the order in which you passed the iterables, whereas as_completed() yields whichever future completed first. Besides, iterating over map() yields the results of the futures, while iterating over as_completed(futures) yields the Future objects themselves.

from concurrent.futures import ThreadPoolExecutor, as_completed

import requests

def fetch(url, timeout):
    r = requests.get(url, timeout=timeout)
    data = r.json()['args']
    return data

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = {}
    for i in range(42):
        # hypothetical endpoint (the URL in the original was elided)
        url = '{0}'.format(i)
        future = executor.submit(fetch, url, 60)
        futures[future] = url

    for future in as_completed(futures):
        url = futures[future]
        try:
            data = future.result()
        except Exception as exc:
            print('fetch {0} failed: {1}'.format(url, exc))
        else:
            print('fetch {0}, get {1}'.format(url, data))


goroutine, channel


With an unbuffered channel:
reading from the channel (value <- ch) blocks the current goroutine until another goroutine writes data into it (ch <- 1),
and writing to the channel (ch <- 1) also blocks the current goroutine until another goroutine receives the data (value <- ch).
main() itself is also a goroutine.

Under runtime.GOMAXPROCS(1) (which used to be the default),
only one goroutine runs at any given time.
When a goroutine blocks (e.g. on I/O or time.Sleep()),
it yields the CPU to other goroutines (as if runtime.Gosched() had been called).
In other words, if a goroutine never blocks,
it never yields execution to any other goroutine
and just keeps running until it returns.

func say(s string) {
    for i := 0; i < 5; i++ {
        fmt.Println(s)
    }
}

// by default only one CPU is used (everything runs in a single thread)
// the main() goroutine is occupied by this busy infinite for loop
// (a busy loop does not count as blocking)
// so it never gets a chance to hand execution over to other goroutines
func main() {
    go say("something")
    for {
    }
}


buffered channel

// a channel has no buffer by default:
// as soon as a value is sent, the channel blocks until that value is received
ch0 := make(chan int)

// a channel with capacity 2, which you can think of as a queue:
// no goroutine blocks until the queue is full
ch2 := make(chan int, 2)


Before Go 1.5, Go would only use one CPU to run goroutines by default,
regardless of how many CPUs your machine has,
but you could override this with the GOMAXPROCS environment variable.
(Since Go 1.5, GOMAXPROCS defaults to the number of available CPUs.)

import "runtime"

// runtime.GOMAXPROCS(n) sets the maximum number of CPUs that can execute
// goroutines simultaneously; runtime.NumCPU() reports how many logical
// CPUs are available
runtime.GOMAXPROCS(runtime.NumCPU())



A sync.WaitGroup is more useful for running different tasks in parallel and waiting for all of them to finish.

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    wg.Add(2)

    var aww string
    go func() {
        defer wg.Done()
        aww = fetch("") // the URL in the original was elided
    }()

    var funny string
    go func() {
        defer wg.Done()
        funny = fetch("") // the URL in the original was elided
    }()

    wg.Wait()

    fmt.Println("aww:", aww)
    fmt.Println("funny:", funny)
}

func fetch(url string) string {
    res, err := http.Get(url)
    if err != nil {
        panic(err)
    }
    defer res.Body.Close()

    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        panic(err)
    }
    return string(body)
}


fatal error: all goroutines are asleep - deadlock!

func main() {
    ch := make(chan int)
    <-ch // blocks forever
}

Because no goroutine can ever write data into ch,
the main() goroutine blocks forever, and the runtime reports the deadlock above.