rayon-core

Rayon is a popular Rust library for data parallelism, making it easy to convert sequential computations into parallel ones. While most users interact with the high-level `rayon` crate, `rayon-core` is its foundational, low-level component. It provides the core parallel primitives and thread pool management that `rayon` builds upon.

What is rayon-core?
`rayon-core` is the internal dependency of the main `rayon` crate. It's responsible for the underlying machinery of Rayon's parallelism, including:
* Thread Pool Management: It manages the worker threads that execute parallel tasks. This includes creating the threads, handling their lifecycle, and implementing a highly efficient work-stealing scheduler to distribute tasks dynamically among them.
* Low-Level Parallel Primitives: It exposes fundamental building blocks for parallelism, such as `join` (for dividing work into two independent sub-tasks) and `spawn_fifo` (for spawning tasks that are executed in a FIFO manner, useful for certain scheduling patterns). A short `join` sketch follows this list.
* Context Management: It handles how parallel tasks get scheduled onto the available threads within a specific `ThreadPool` instance, allowing for the temporary installation of custom thread pools.
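
A minimal sketch of the `join` primitive in a divide-and-conquer shape: a recursive parallel sum over a slice. The function name and threshold are illustrative choices, not part of any Rayon API; `join` itself runs the two closures potentially in parallel on whichever pool is current (the lazily created global pool by default).

```rust
// Illustrative divide-and-conquer sum built directly on rayon_core::join.
fn parallel_sum(data: &[u64]) -> u64 {
    // Below this (arbitrary) threshold, splitting costs more than it saves,
    // so fall back to a sequential sum.
    const SEQUENTIAL_THRESHOLD: usize = 1024;
    if data.len() <= SEQUENTIAL_THRESHOLD {
        return data.iter().sum();
    }
    let (left, right) = data.split_at(data.len() / 2);
    // The two halves may run on different worker threads; `join` returns
    // once both closures have completed.
    let (left_sum, right_sum) = rayon_core::join(|| parallel_sum(left), || parallel_sum(right));
    left_sum + right_sum
}

fn main() {
    let data: Vec<u64> = (1..=10_000).collect();
    println!("sum = {}", parallel_sum(&data)); // prints 50005000
}
```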

Relationship to the `rayon` Crate:
The main `rayon` crate provides an ergonomic, high-level API primarily through parallel iterators (e.g., `vec.par_iter()`). This API abstracts away the complexities of thread management and task scheduling. Under the hood, however, all these parallel operations leverage the `ThreadPool` and primitives provided by `rayon-core`.

Why use `rayon-core` directly?
Direct interaction with `rayon-core` is less common for general application development but becomes necessary for specific advanced scenarios:
* Custom Thread Pools: When you need explicit control over the thread pool's configuration (e.g., a specific number of threads, custom thread names, custom stack sizes) or when you want to run different parts of your application with separate, isolated thread pools. A two-pool sketch follows this list.
* Advanced Scheduling: For highly specialized use cases requiring custom scheduling logic or integrating Rayon's work-stealing scheduler with existing asynchronous runtimes or other execution environments.
* Building Custom Parallel Abstractions: If you are developing your own parallel library or framework and want to leverage Rayon's efficient work-stealing scheduler as a backend without exposing the full `rayon` parallel iterator API.
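
A brief sketch of the "separate, isolated thread pools" point above, assuming both `rayon` and `rayon-core` are available as dependencies (the pool sizes and names are arbitrary). Work submitted through `ThreadPool::install` runs only on that pool's workers, so the two workloads cannot compete for the same threads.

```rust
use rayon::prelude::*;
use rayon_core::ThreadPoolBuilder;

fn main() {
    // One pool sized for latency-tolerant work, one kept small for CPU-bound work.
    let io_pool = ThreadPoolBuilder::new()
        .num_threads(8)
        .thread_name(|i| format!("io-worker-{}", i))
        .build()
        .expect("failed to build io pool");

    let cpu_pool = ThreadPoolBuilder::new()
        .num_threads(2)
        .thread_name(|i| format!("cpu-worker-{}", i))
        .build()
        .expect("failed to build cpu pool");

    // Each `install` call routes the enclosed Rayon operations to its own pool.
    let upper: Vec<String> = io_pool.install(|| {
        vec!["a", "b", "c"].par_iter().map(|s| s.to_uppercase()).collect()
    });
    let squares: Vec<u32> = cpu_pool.install(|| (1..=5u32).into_par_iter().map(|x| x * x).collect());

    println!("upper = {:?}, squares = {:?}", upper, squares);
}
```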

Core Concepts in `rayon-core`:
* `ThreadPool`: The central structure representing a collection of worker threads managed by Rayon's scheduler.
* `ThreadPoolBuilder`: Used to configure and create `ThreadPool` instances with specific parameters.
* `ThreadPool::install()`: A critical method that temporarily makes a specific `ThreadPool` the active one for any Rayon-based operations (including those from the high-level `rayon` crate) within the closure passed to `install`.
* `rayon_core::join()`: A fundamental primitive for divide-and-conquer parallelism. It executes two closures, potentially in parallel, and waits for both to complete, returning their results.
* `rayon_core::spawn_fifo()`: Spawns a fire-and-forget task, queued on the current worker thread's FIFO queue (or the global queue when called from outside the pool). Unlike `spawn`, which follows the scheduler's usual LIFO ordering, `spawn_fifo` preserves submission order among tasks spawned from the same thread. A short sketch follows this list.
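
A minimal sketch of `spawn_fifo`. Spawned tasks are fire-and-forget and return nothing, so the `std::sync::mpsc` channel below is only an illustrative way to collect results and wait for completion; it is not part of the rayon-core API.

```rust
use std::sync::mpsc::channel;

fn main() {
    let (sender, receiver) = channel();

    for task_id in 0..4 {
        let sender = sender.clone();
        // Tasks spawned with spawn_fifo from the same thread are queued in
        // FIFO order and run on the current (here: global) pool.
        rayon_core::spawn_fifo(move || {
            let result = task_id * 10;
            sender.send((task_id, result)).expect("receiver dropped");
        });
    }
    drop(sender); // Close the channel so the loop below can terminate.

    // Blocks until every spawned task has sent its result and dropped its sender.
    for (task_id, result) in receiver {
        println!("task {} produced {}", task_id, result);
    }
}
```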

In summary, `rayon-core` is the powerful engine beneath the user-friendly `rayon` interface. It's the go-to for precise control over Rayon's parallelism execution.

Example Code

```rust
use rayon_core::ThreadPoolBuilder;
use rayon::prelude::*;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    println!("Demonstrating rayon-core for custom thread pool management and low-level primitives.\n");

    // 1. Create a custom ThreadPool using rayon-core::ThreadPoolBuilder
    // We'll create a pool with 2 threads for demonstration to clearly see parallelism.
    let pool = ThreadPoolBuilder::new()
        .num_threads(2) // Limit the number of threads for observation
        .thread_name(|i| format!("my-rayon-worker-{}", i)) // Custom thread names
        .build()
        .expect("Failed to build thread pool");

    println!("Custom Rayon-core ThreadPool created with {} worker threads.", pool.current_num_threads());

    let shared_results = Arc::new(Mutex::new(Vec::<(usize, Option<usize>, thread::ThreadId)>::new()));

    // 2. Install the custom ThreadPool and execute work within its context
    // Any `rayon` (high-level) or `rayon_core` operations within this closure
    // will use this specific `pool`.
    pool.install(|| {
        println!("\n--- Inside the custom ThreadPool context ---");
        println!("Current (main) thread in Rayon context: {:?}", rayon::current_thread_index());

        // Demonstrate a simple parallel operation using rayon's high-level API
        // This operation will now utilize the `pool` we just installed.
        println!("Executing parallel iterator tasks:");
        (0..5).into_par_iter().for_each(|i| {
            let rayon_thread_idx = rayon::current_thread_index();
            let os_thread_id = thread::current().id();
            println!(
                "  Task {} executing on Rayon thread {:?} (OS thread ID: {:?}, name: {:?})",
                i,
                rayon_thread_idx,
                os_thread_id,
                thread::current().name()
            );
            thread::sleep(Duration::from_millis(100)); // Simulate some work
            let mut results = shared_results.lock().unwrap();
            results.push((i, rayon_thread_idx, os_thread_id));
        });

        // 3. Demonstrating a low-level primitive: `rayon_core::join`
        println!("\nExecuting rayon_core::join tasks:");
        let (result_left, result_right) = rayon_core::join(
            || {
                let rayon_idx = rayon::current_thread_index();
                let os_id = thread::current().id();
                println!("  Left join task on Rayon thread {:?} (OS thread ID: {:?})", rayon_idx, os_id);
                thread::sleep(Duration::from_millis(50));
                100
            },
            || {
                let rayon_idx = rayon::current_thread_index();
                let os_id = thread::current().id();
                println!("  Right join task on Rayon thread {:?} (OS thread ID: {:?})", rayon_idx, os_id);
                thread::sleep(Duration::from_millis(150));
                200
            },
        );
        println!("rayon_core::join results: (Left: {}, Right: {})", result_left, result_right);

    }); // After `install` returns, execution is back on the main thread; subsequent Rayon calls fall back to the global pool (created lazily on first use).

    println!("\n--- Outside the custom ThreadPool context ---");
    println!("Final results from parallel iterator tasks (order might vary):");
    let final_results = shared_results.lock().unwrap();
    for (task_id, rayon_idx, os_id) in final_results.iter() {
        println!("  Task {} handled by Rayon thread {:?} (OS ID: {:?})", task_id, rayon_idx, os_id);
    }

    // Outside `install`, rayon's high-level API uses the global thread pool,
    // initializing it lazily on first use. On a non-worker thread such as
    // `main`, `rayon::current_thread_index()` simply returns `None`.
    println!("\nMain thread continuing after custom pool execution.");
}
```
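
Note that the example mixes the high-level `rayon` iterator API with `rayon-core` primitives, so both crates need to be declared as dependencies; they must resolve to the same underlying `rayon-core` version for the installed pool to be shared between them.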