Ray tracing is a rendering technique in computer graphics that generates an image by simulating the path of light (more accurately, viewing rays) as it interacts with objects in a virtual 3D scene; a ray tracer is a program that implements this technique. It works by tracing rays from the viewer's eye (camera) through each pixel on an imaginary screen into the scene, rather than outward from light sources, which is why it is often called a 'backwards' approach.
Core Principles:
1. Ray Generation: For every pixel on the image plane, a primary ray is cast from the camera's origin through the center of that pixel into the 3D scene. The image resolution determines the number of primary rays: a 1920x1080 image requires 2,073,600 of them, one per pixel (before any anti-aliasing, which casts several rays per pixel).
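As a sketch of this pixel-to-ray mapping, the helper below converts a pixel coordinate to a primary ray direction for a pinhole camera at the origin looking down -Z. The names and conventions here are illustrative, not taken from the full program later in this article:

```rust
// Minimal sketch: map pixel (i, j) to a primary ray direction for a
// pinhole camera at the origin looking down -Z. All names are illustrative.
fn primary_ray_direction(i: u32, j: u32, width: u32, height: u32) -> (f64, f64, f64) {
    let aspect = width as f64 / height as f64;
    // Normalized device coordinates in [-1, 1], sampling pixel centers, +y up.
    let u = (2.0 * ((i as f64 + 0.5) / width as f64) - 1.0) * aspect;
    let v = 1.0 - 2.0 * ((j as f64 + 0.5) / height as f64);
    (u, v, -1.0) // focal length of 1 along -Z; normalize before use
}
```

A pixel near the image center maps to a direction close to straight ahead, (0, 0, -1), while corner pixels map to the corners of the viewport.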
2. Intersection Testing: Each primary ray is tested for intersection with every object in the scene. The goal is to find the *closest* object that the ray hits. If no object is hit, the pixel receives a background color.
3. Shading and Lighting: If an intersection is found, the color of that pixel is determined. This involves calculating how light interacts with the object's surface at the precise intersection point. A lighting model (e.g., Phong, physically based rendering) considers factors like:
* Ambient Light: A general, non-directional light that provides uniform illumination, preventing completely black areas.
* Diffuse Light: Light scattered uniformly in all directions from the surface. Its intensity depends on the angle between the surface normal (a vector perpendicular to the surface) and the direction to the light source.
* Specular Light: A highlight reflection that mimics shiny surfaces. Its intensity depends on the angle between the view direction and the reflected light direction.
* Shadows: To determine if a point is in shadow, secondary rays (called shadow rays) are cast from the intersection point towards each light source. If a shadow ray hits another object before reaching the light, the point is in shadow for that specific light source.
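For a single light, the diffuse and specular terms above can be sketched as one scalar Phong-style intensity. This is a hedged illustration only: the tuple-based vectors and all names are made up for this snippet, and every direction is assumed to be pre-normalized:

```rust
// Sketch of a Phong-style lighting term for one light. Vectors are plain
// (f64, f64, f64) tuples and must be unit length; all names are illustrative.
fn dot(a: (f64, f64, f64), b: (f64, f64, f64)) -> f64 {
    a.0 * b.0 + a.1 * b.1 + a.2 * b.2
}

// normal, light_dir, view_dir: unit vectors; kd/ks: material coefficients.
fn phong_intensity(
    normal: (f64, f64, f64),
    light_dir: (f64, f64, f64),
    view_dir: (f64, f64, f64),
    kd: f64,
    ks: f64,
    shininess: f64,
) -> f64 {
    // Diffuse: proportional to the cosine of the normal/light angle, clamped at 0.
    let n_dot_l = dot(normal, light_dir).max(0.0);
    // Reflect light_dir about the normal: R = 2(N·L)N - L
    let r = (
        2.0 * n_dot_l * normal.0 - light_dir.0,
        2.0 * n_dot_l * normal.1 - light_dir.1,
        2.0 * n_dot_l * normal.2 - light_dir.2,
    );
    // Specular: sharp falloff around the mirror direction, controlled by shininess.
    kd * n_dot_l + ks * dot(view_dir, r).max(0.0).powf(shininess)
}
```

With the light and viewer both head-on to the surface, both terms peak; with the light grazing at 90 degrees, both drop to zero.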
4. Recursive Ray Tracing (Advanced): For highly realistic effects such as reflections and refractions, secondary rays are recursively spawned from the intersection point:
* Reflection Rays: A new ray is cast in the reflection direction. The color contribution from this reflected ray is added to the pixel's color, weighted by the material's reflectivity.
* Refraction Rays: For transparent objects (like glass or water), a new ray is cast into or out of the object, bending according to Snell's law and the material's refractive index. This produces the characteristic bending and distortion of objects seen through transparent media.
This recursion continues for a predetermined maximum depth or until no more intersections occur, accumulating color contributions at each step.
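The geometric core of a reflection bounce can be sketched in a few lines (an illustrative snippet with tuple-based vectors, not part of the program below): for an incoming ray direction D and unit surface normal N, the reflected direction is R = D - 2(D·N)N, and the recursive tracer casts a new ray along R from the hit point, adding its color weighted by the material's reflectivity:

```rust
// Hedged sketch: compute the mirror-reflection direction for an incoming
// direction d about a unit normal n, using R = D - 2(D·N)N.
// A recursive tracer would spawn a new ray along the result.
fn reflect(d: (f64, f64, f64), n: (f64, f64, f64)) -> (f64, f64, f64) {
    let d_dot_n = d.0 * n.0 + d.1 * n.1 + d.2 * n.2;
    (
        d.0 - 2.0 * d_dot_n * n.0,
        d.1 - 2.0 * d_dot_n * n.1,
        d.2 - 2.0 * d_dot_n * n.2,
    )
}
```

For example, a ray travelling diagonally down onto a floor with normal (0, 1, 0) bounces back up with its vertical component negated.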
5. Color Accumulation: The color contributions from all light sources and any spawned secondary rays are combined to determine the final color of the pixel.
Advantages:
* High Realism: Ray tracing inherently produces accurate reflections, refractions, shadows, soft shadows (with modifications), and other optical phenomena with relative ease compared to rasterization-based methods.
* Physical Accuracy: It simulates light transport more closely to physical reality, leading to more believable images.
Disadvantages:
* Computational Cost: Intersection testing for every ray with every object in the scene can be extremely computationally expensive, especially for complex scenes with many objects or high image resolutions. This leads to longer rendering times. Optimization techniques like spatial partitioning (e.g., Bounding Volume Hierarchies, k-d trees) are crucial to make it practical.
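To illustrate why spatial partitioning helps: a BVH lets the tracer reject whole groups of objects with a cheap axis-aligned bounding-box (AABB) test before doing any exact intersections. A minimal 'slab' test, the building block of a BVH traversal, might look like this (an illustrative sketch, not part of the program below):

```rust
// Minimal AABB "slab" intersection test: intersect the ray with the three
// pairs of axis-aligned planes and check that the intervals overlap.
// Returns true if a ray (origin o, direction d) hits the box [bmin, bmax].
// Relies on IEEE infinity semantics when a direction component is zero;
// a production version would handle the origin-on-slab edge case too.
fn hit_aabb(o: [f64; 3], d: [f64; 3], bmin: [f64; 3], bmax: [f64; 3]) -> bool {
    let (mut t_near, mut t_far) = (f64::NEG_INFINITY, f64::INFINITY);
    for axis in 0..3 {
        let inv = 1.0 / d[axis];
        let (mut t0, mut t1) = ((bmin[axis] - o[axis]) * inv, (bmax[axis] - o[axis]) * inv);
        if t0 > t1 {
            std::mem::swap(&mut t0, &mut t1);
        }
        t_near = t_near.max(t0);
        t_far = t_far.min(t1);
    }
    t_near <= t_far && t_far >= 0.0
}
```

A BVH node stores such a box around all its children; if the box test fails, every object inside is skipped in one step, which is what turns a linear scan over the scene into a roughly logarithmic traversal.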
Basic Algorithm Flow:
```
For each pixel (x, y) on the screen:
    Generate a primary ray 'r' from the camera through (x, y).
    Initialize 'closest_t = infinity', 'hit_object = null'.
    For each object 'obj' in the scene:
        If 'r' intersects 'obj' at parameter 't':
            If 't < closest_t' and 't > epsilon' (to avoid self-intersection):
                'closest_t = t'
                'hit_object = obj'
    If 'hit_object' is not null:
        'hit_point = r.origin + r.direction * closest_t'
        'normal = hit_object.get_normal(hit_point)'
        'color = calculate_lighting(hit_point, normal, hit_object.material, lights, camera_position)'
    Else:
        'color = background_color'
    Set pixel (x, y) to 'color'.
```
Example Code
```rust
use std::fs::File;
use std::io::{self, Write};
use std::f64::INFINITY;
// --- Vec3: Represents a 3D vector, point, or RGB color ---
#[derive(Debug, Copy, Clone)]
pub struct Vec3 {
pub x: f64,
pub y: f64,
pub z: f64,
}
impl Vec3 {
pub fn new(x: f64, y: f64, z: f64) -> Vec3 {
Vec3 { x, y, z }
}
pub fn length_squared(&self) -> f64 {
self.x * self.x + self.y * self.y + self.z * self.z
}
pub fn length(&self) -> f64 {
self.length_squared().sqrt()
}
pub fn normalized(&self) -> Vec3 {
let len = self.length();
if len == 0.0 {
*self // Avoid division by zero
} else {
*self / len
}
}
pub fn dot(&self, other: &Vec3) -> f64 {
self.x * other.x + self.y * other.y + self.z * other.z
}
// Clamps color components to [0, 1] for proper display
pub fn clamp(&self) -> Vec3 {
Vec3::new(self.x.max(0.0).min(1.0), self.y.max(0.0).min(1.0), self.z.max(0.0).min(1.0))
}
}
// Implement basic arithmetic for Vec3
impl std::ops::Add for Vec3 {
type Output = Vec3;
fn add(self, other: Vec3) -> Vec3 {
Vec3::new(self.x + other.x, self.y + other.y, self.z + other.z)
}
}
impl std::ops::Sub for Vec3 {
type Output = Vec3;
fn sub(self, other: Vec3) -> Vec3 {
Vec3::new(self.x - other.x, self.y - other.y, self.z - other.z)
}
}
impl std::ops::Mul<f64> for Vec3 {
type Output = Vec3;
fn mul(self, scalar: f64) -> Vec3 {
Vec3::new(self.x * scalar, self.y * scalar, self.z * scalar)
}
}
impl std::ops::Mul<Vec3> for Vec3 { // Component-wise multiplication for colors
type Output = Vec3;
fn mul(self, other: Vec3) -> Vec3 {
Vec3::new(self.x * other.x, self.y * other.y, self.z * other.z)
}
}
impl std::ops::Div<f64> for Vec3 {
type Output = Vec3;
fn div(self, scalar: f64) -> Vec3 {
Vec3::new(self.x / scalar, self.y / scalar, self.z / scalar)
}
}
// --- Ray: Origin and Direction ---
#[derive(Debug, Copy, Clone)]
pub struct Ray {
pub origin: Vec3,
pub direction: Vec3,
}
impl Ray {
pub fn new(origin: Vec3, direction: Vec3) -> Ray {
Ray { origin, direction: direction.normalized() }
}
// Get point along the ray at parameter t
pub fn at(&self, t: f64) -> Vec3 {
self.origin + self.direction * t
}
}
// --- Material: Basic color and lighting properties ---
#[derive(Debug, Copy, Clone)]
pub struct Material {
pub color: Vec3,
pub ambient: f64,
pub diffuse: f64,
pub specular: f64,
pub shininess: f64,
}
impl Material {
pub fn new(color: Vec3, ambient: f64, diffuse: f64, specular: f64, shininess: f64) -> Material {
Material { color, ambient, diffuse, specular, shininess }
}
}
// --- Sphere: Geometric object ---
#[derive(Debug, Copy, Clone)]
pub struct Sphere {
pub center: Vec3,
pub radius: f64,
pub material: Material,
}
impl Sphere {
pub fn new(center: Vec3, radius: f64, material: Material) -> Sphere {
Sphere { center, radius, material }
}
// Intersect a ray with the sphere
// Returns the closest positive 't' value if an intersection occurs
pub fn intersect(&self, ray: &Ray) -> Option<f64> {
let oc = ray.origin - self.center; // Vector from ray origin to sphere center
let a = ray.direction.dot(&ray.direction);
let b = 2.0 * oc.dot(&ray.direction);
let c = oc.dot(&oc) - self.radius * self.radius;
let discriminant = b * b - 4.0 * a * c;
if discriminant < 0.0 {
None // No real intersection points
} else {
let t1 = (-b - discriminant.sqrt()) / (2.0 * a);
let t2 = (-b + discriminant.sqrt()) / (2.0 * a);
// Find the smallest positive t value (closest intersection in front of the ray)
if t1 > 0.001 { // Use a small epsilon to avoid self-intersection artifacts
Some(t1)
} else if t2 > 0.001 {
Some(t2)
} else {
None // Both intersections are behind the ray origin
}
}
}
// Get surface normal at a given point on the sphere
pub fn normal_at(&self, point: Vec3) -> Vec3 {
(point - self.center).normalized()
}
}
// --- Light: Simple point light source ---
#[derive(Debug, Copy, Clone)]
pub struct PointLight {
pub position: Vec3,
pub color: Vec3, // Light intensity/color
}
impl PointLight {
pub fn new(position: Vec3, color: Vec3) -> PointLight {
PointLight { position, color }
}
}
// --- Scene: Holds objects, lights, and global lighting ---
pub struct Scene {
pub spheres: Vec<Sphere>,
pub lights: Vec<PointLight>,
pub ambient_light: Vec3,
pub background_color: Vec3,
}
impl Scene {
pub fn new(ambient_light: Vec3, background_color: Vec3) -> Scene {
Scene {
spheres: Vec::new(),
lights: Vec::new(),
ambient_light,
background_color,
}
}
pub fn add_sphere(&mut self, sphere: Sphere) {
self.spheres.push(sphere);
}
pub fn add_light(&mut self, light: PointLight) {
self.lights.push(light);
}
// Find the closest intersection for a given ray in the scene
pub fn hit(&self, ray: &Ray) -> Option<(f64, &Sphere)> {
let mut closest_t = INFINITY;
let mut hit_sphere: Option<&Sphere> = None;
for sphere in &self.spheres {
if let Some(t) = sphere.intersect(ray) {
if t < closest_t {
closest_t = t;
hit_sphere = Some(sphere);
}
}
}
hit_sphere.map(|s| (closest_t, s))
}
// Calculate the color for a pixel based on the ray and scene interaction
pub fn trace_ray(&self, ray: &Ray) -> Vec3 {
if let Some((t, hit_sphere)) = self.hit(ray) {
let hit_point = ray.at(t);
let normal = hit_sphere.normal_at(hit_point);
let material = hit_sphere.material;
let mut final_color = self.ambient_light * material.color * material.ambient;
for light in &self.lights {
let light_dir = (light.position - hit_point).normalized();
let light_distance = (light.position - hit_point).length();
// Shadow check: Cast a ray from hit_point to light source
// Nudge origin slightly along normal to avoid self-shadowing due to floating point inaccuracies
let shadow_ray = Ray::new(hit_point + normal * 0.001, light_dir);
let mut in_shadow = false;
// Check if any other object blocks the light path
for other_sphere in &self.spheres {
// Don't check against the sphere that was just hit
// Skip via reference identity (std::ptr::eq); safe here since spheres are convex.
if std::ptr::eq(other_sphere, hit_sphere) {
continue;
}
if let Some(shadow_t) = other_sphere.intersect(&shadow_ray) {
if shadow_t < light_distance { // If intersection occurs before the light source
in_shadow = true;
break;
}
}
}
if !in_shadow {
// Diffuse lighting: depends on angle between normal and light direction
let diffuse_intensity = normal.dot(&light_dir).max(0.0);
final_color = final_color + (light.color * material.color * material.diffuse) * diffuse_intensity;
// Specular lighting: simulates highlights. Guarded by the diffuse term so
// the unlit side of a surface cannot receive a spurious highlight.
if diffuse_intensity > 0.0 {
    let view_dir = (ray.origin - hit_point).normalized();
    let reflect_dir = (normal * (2.0 * normal.dot(&light_dir))) - light_dir; // R = 2(N·L)N - L
    let specular_intensity = view_dir.dot(&reflect_dir).max(0.0).powf(material.shininess);
    final_color = final_color + light.color * material.specular * specular_intensity;
}
}
}
final_color.clamp() // Ensure color components are within [0, 1]
} else {
self.background_color // No object hit, return background color
}
}
}
// --- Main rendering function ---
pub fn render() -> io::Result<()> {
// Image dimensions
let aspect_ratio = 16.0 / 9.0;
let image_width: u32 = 800;
let image_height: u32 = (image_width as f64 / aspect_ratio) as u32;
// Camera settings (a simple 'look-at' setup effectively at origin looking towards -Z)
let viewport_height = 2.0;
let viewport_width = aspect_ratio * viewport_height;
let focal_length = 1.0; // Distance from camera to image plane
let origin = Vec3::new(0.0, 0.0, 0.0); // Camera origin
let horizontal = Vec3::new(viewport_width, 0.0, 0.0);
let vertical = Vec3::new(0.0, viewport_height, 0.0);
let lower_left_corner = origin - horizontal / 2.0 - vertical / 2.0 - Vec3::new(0.0, 0.0, focal_length);
// Scene setup
let mut scene = Scene::new(
Vec3::new(0.1, 0.1, 0.1), // Global ambient light
Vec3::new(0.5, 0.7, 1.0) // Background color (sky blue)
);
// Define materials
let mat_red = Material::new(Vec3::new(1.0, 0.0, 0.0), 0.1, 0.7, 0.5, 32.0);
let mat_green = Material::new(Vec3::new(0.0, 1.0, 0.0), 0.1, 0.7, 0.5, 32.0);
let mat_blue = Material::new(Vec3::new(0.0, 0.0, 1.0), 0.1, 0.7, 0.5, 32.0);
let mat_ground = Material::new(Vec3::new(0.8, 0.8, 0.8), 0.1, 0.9, 0.0, 1.0); // Dull, less specular ground
// Add spheres to the scene
scene.add_sphere(Sphere::new(Vec3::new(0.0, 0.0, -1.0), 0.5, mat_red));
scene.add_sphere(Sphere::new(Vec3::new(-1.0, 0.0, -1.0), 0.5, mat_green));
scene.add_sphere(Sphere::new(Vec3::new(1.0, 0.0, -1.0), 0.5, mat_blue));
scene.add_sphere(Sphere::new(Vec3::new(0.0, -100.5, -1.0), 100.0, mat_ground)); // Large sphere for ground plane effect
// Add point lights to the scene
scene.add_light(PointLight::new(Vec3::new(-2.0, 1.0, 0.0), Vec3::new(1.0, 1.0, 1.0))); // White light
scene.add_light(PointLight::new(Vec3::new(2.0, 2.0, 0.0), Vec3::new(0.8, 0.8, 0.5))); // Warm yellow light
// Output to PPM (Portable PixMap) file format
let mut file = File::create("output.ppm")?;
writeln!(file, "P3")?;
writeln!(file, "{} {}", image_width, image_height)?;
writeln!(file, "255")?;
// Iterate over each pixel, casting rays and determining color
for j in (0..image_height).rev() { // Top row first: v grows upward, but PPM stores rows top to bottom
for i in 0..image_width { // Iterate from left to right
// Calculate normalized device coordinates (u, v) for the current pixel
let u = i as f64 / (image_width - 1) as f64;
let v = j as f64 / (image_height - 1) as f64;
// Calculate the ray direction from camera origin through the pixel on the viewport
let ray_direction = lower_left_corner + horizontal * u + vertical * v - origin;
let ray = Ray::new(origin, ray_direction);
let pixel_color = scene.trace_ray(&ray);
// Convert normalized color (0.0-1.0) to 0-255 for PPM output
let ir = (255.999 * pixel_color.x) as u8;
let ig = (255.999 * pixel_color.y) as u8;
let ib = (255.999 * pixel_color.z) as u8;
writeln!(file, "{} {} {}", ir, ig, ib)?;
}
}
Ok(())
}
fn main() {
println!("Rendering image...");
match render() {
Ok(_) => println!("Image 'output.ppm' rendered successfully! Open it with an image viewer."),
Err(e) => eprintln!("Error rendering image: {}", e),
}
}
```
To run this Rust code:
1. Save the code as `main.rs`.
2. Open your terminal or command prompt.
3. Navigate to the directory where you saved `main.rs`.
4. Compile the code: `rustc main.rs`
5. Run the executable: `.\main.exe` (on Windows) or `./main` (on Linux/macOS).
This will generate a file named `output.ppm` in the same directory. You can open this `.ppm` file with most image editors and viewers (e.g., GIMP, Photoshop, or IrfanView) to see the rendered image.