Advanced WaitGroup Patterns and Techniques
While the basic usage of sync.WaitGroup is straightforward, more advanced patterns and techniques can help you build more robust and flexible concurrent applications. In this section, we'll explore a few of them.
Limiting Concurrency with WaitGroup
One common use case for WaitGroup is limiting the number of concurrent operations. This is useful when you are working against constrained resources, such as a database connection pool or an API rate limit, that you don't want to exhaust.
Here's an example of using WaitGroup to limit the number of concurrent HTTP requests:
package main

import (
    "fmt"
    "net/http"
    "sync"
)

func fetchURL(url string, wg *sync.WaitGroup, sem chan struct{}) {
    defer wg.Done()

    // Acquire a slot in the semaphore
    sem <- struct{}{}
    defer func() { <-sem }()

    resp, err := http.Get(url)
    if err != nil {
        fmt.Printf("Error fetching %s: %v\n", url, err)
        return
    }
    defer resp.Body.Close()

    fmt.Printf("Fetched %s\n", url)
}

func main() {
    var wg sync.WaitGroup
    const maxConcurrency = 5

    // Create a semaphore channel to limit concurrency
    sem := make(chan struct{}, maxConcurrency)
    // Placeholder URLs for illustration; substitute the pages you actually want to fetch.
    urls := []string{
        "https://example.com/page1",
        "https://example.com/page2",
        "https://example.com/page3",
        "https://example.com/page4",
        "https://example.com/page5",
        "https://example.com/page6",
        "https://example.com/page7",
    }
    for _, url := range urls {
        wg.Add(1)
        go fetchURL(url, &wg, sem)
    }

    wg.Wait()
    fmt.Println("All URLs fetched.")
}
In this example, we use a buffered channel sem as a semaphore to limit the number of concurrent HTTP requests to maxConcurrency (in this case, 5). Each goroutine that calls fetchURL must acquire a slot in the semaphore before making its request and releases the slot when the request completes. This ensures that we don't overwhelm the system with too many concurrent requests.
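One design note: because the semaphore is acquired inside fetchURL, all of the goroutines are created immediately and then block waiting for a slot. If you also want to bound the number of goroutines that exist at any moment, you can acquire the slot before spawning. The sketch below reuses the wg, sem, and urls variables from the example above:

    for _, url := range urls {
        wg.Add(1)
        sem <- struct{}{} // block until a slot is free, before starting the goroutine
        go func(u string) {
            defer wg.Done()
            defer func() { <-sem }() // release the slot when this goroutine finishes

            resp, err := http.Get(u)
            if err != nil {
                fmt.Printf("Error fetching %s: %v\n", u, err)
                return
            }
            defer resp.Body.Close()
            fmt.Printf("Fetched %s\n", u)
        }(url)
    }
    wg.Wait()

Passing url as an argument to the closure avoids the loop-variable capture pitfall on Go versions before 1.22.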
Handling Errors and Cancellation with WaitGroup
When working with WaitGroup in more complex scenarios, it's important to consider how to handle errors and cancellation. One approach is to use a context.Context to propagate cancellation signals to the goroutines.
Here's an example that demonstrates how to use context.Context with WaitGroup to propagate cancellation:
package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)
func processItem(ctx context.Context, item int, wg *sync.WaitGroup) {
    defer wg.Done()

    // Check for cancellation before starting; once the default branch is taken,
    // the simulated work runs to completion.
    select {
    case <-ctx.Done():
        fmt.Printf("Cancelled processing item %d\n", item)
        return
    default:
        // Simulate processing the item
        fmt.Printf("Processing item %d\n", item)
        time.Sleep(time.Second)
    }
}
func main() {
    var wg sync.WaitGroup

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    for i := 0; i < 10; i++ {
        wg.Add(1)
        go processItem(ctx, i, &wg)
    }

    wg.Wait()
    fmt.Println("All items processed.")
}
In this example, we create a context.Context with a 5-second timeout and pass it to each goroutine that calls processItem. If the context is canceled (either because the timeout expires or because cancel() is called), any goroutine that hasn't yet started its work sees the cancellation signal and exits gracefully. Note that the select with a default case checks for cancellation only once, before the work begins; in this particular run, where each item takes about a second and the timeout is five seconds, the cancellation branch is unlikely to fire at all.
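If you want cancellation to interrupt work that is already in progress, one option is to select on the context and a timer instead of sleeping unconditionally. This is a minimal sketch that keeps the signature of processItem above and stands in for real work with time.After:

func processItem(ctx context.Context, item int, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("Processing item %d\n", item)

    select {
    case <-ctx.Done():
        // The timeout expired or cancel() was called while we were still working.
        fmt.Printf("Cancelled processing item %d: %v\n", item, ctx.Err())
    case <-time.After(time.Second):
        // The simulated work finished before the context was cancelled.
        fmt.Printf("Finished item %d\n", item)
    }
}

With this version, a goroutine that is still waiting in its select when the context is canceled reports the cancellation immediately instead of finishing its sleep.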
By using context.Context with WaitGroup, you can build more robust concurrent applications that respond to cancellation cleanly. Handling errors takes one more ingredient, since WaitGroup itself has no way to carry a result back from a goroutine.
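A common approach is to pair the WaitGroup with a buffered error channel. The sketch below assumes you only need to inspect the errors after all goroutines have finished; the channel is buffered with one slot per goroutine so that no sender ever blocks:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    items := []int{1, 2, 3, 4, 5}

    // One buffer slot per goroutine, so every send succeeds even though
    // nothing reads from the channel until after wg.Wait() returns.
    errCh := make(chan error, len(items))

    for _, item := range items {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            if n%2 == 0 { // pretend that even-numbered items fail
                errCh <- fmt.Errorf("processing item %d failed", n)
            }
        }(item)
    }

    wg.Wait()
    close(errCh) // safe: every sender has finished

    for err := range errCh {
        fmt.Println("error:", err)
    }
}

If you instead want the first error to cancel the remaining work, combining this idea with the context pattern above (or reaching for a purpose-built helper such as golang.org/x/sync/errgroup) is a natural next step.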