Hw1 Submission #7

Open: wants to merge 4 commits into base master
README.md: 146 changes (30 additions & 116 deletions)

CIS565: Project 1: CUDA Raytracer
-------------------------------------------------------------------------------
Fall 2012
-------------------------------------------------------------------------------
Due Tuesday, 09/25/2012
Kong Ma
-------------------------------------------------------------------------------

-------------------------------------------------------------------------------
NOTE:
-------------------------------------------------------------------------------
This project requires an NVIDIA graphics card with CUDA capability! Any card after the GeForce 8xxx series will work. If you do not have an NVIDIA graphics card in the machine you are working on, feel free to use any machine in the SIG Lab or in Moore100 labs. All machines in the SIG Lab and Moore100 are equipped with CUDA capable NVIDIA graphics cards. If this too proves to be a problem, please contact Patrick or Karl as soon as possible.

-------------------------------------------------------------------------------
INTRODUCTION:
-------------------------------------------------------------------------------
In this project, you will implement a CUDA-based raytracer capable of generating raytraced images extremely quickly. For those of you who have taken CIS460/560, building a raytracer should not be anything new from a conceptual point of view. For those of you who have not taken CIS460/560, raytracing is a technique for generating images by tracing rays of light through pixels in an image plane out into a scene and following the way the rays of light bounce off of and interact with objects in the scene. More information can be found here: http://en.wikipedia.org/wiki/Ray_tracing_(graphics).

The ultimate purpose of this project is to serve as the foundation for your next project: a full CUDA based global illumination pathtracer. Raytracing can be thought of as a way to generate an isolated version of the direct light contribution in a global illumination scenario.

Since in this class we are concerned with generating actual images and less so with mundane tasks like file I/O, this project includes basecode for loading the scene description file format, described below, and the various other things that generally make up the render "harness" taking care of everything up to the rendering itself. The core renderer is left for you to implement.
Finally, note that while this basecode is meant to serve as a strong starting point for a CUDA raytracer, you are not required to use it, and you may change any part of the basecode specification as you please, so long as the final rendered result is correct.

-------------------------------------------------------------------------------
CONTENTS:
-------------------------------------------------------------------------------
The Project1 root directory contains the following subdirectories:

* src/ contains the source code for the project. Both the Windows Visual Studio solution and the OSX makefile reference this folder for all source; the base source code compiles on OSX and Windows without modification.
* scenes/ contains an example scene description file.
* renders/ contains an example render of the given example scene file.
* PROJ1_WIN/ contains a Windows Visual Studio 2010 project and all dependencies needed for building and running on Windows 7.
* PROJ1_OSX/ contains an OS X makefile, run script, and all dependencies needed for building and running on Mac OS X 10.8.

The Windows and OSX versions of the project build and run exactly the same way as in Project0.

-------------------------------------------------------------------------------
REQUIREMENTS:
-------------------------------------------------------------------------------
In this project, you are given code for:

* Loading, reading, and storing the TAKUAscene scene description format
* Example functions that can run on both the CPU and GPU for generating random numbers, spherical intersection testing, and surface point sampling on cubes
* A class for handling image operations and saving images
* Working code for CUDA-GL interop
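For reference, the spherical intersection test mentioned above boils down to solving a quadratic. Below is a hedged host-side sketch: the basecode's `sphereIntersectionTest()` operates on transformed unit spheres, so the signature and names here are illustrative assumptions, not its actual API.

```cpp
#include <cassert>
#include <cmath>

struct vec3 { float x, y, z; };

static float dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the distance t >= 0 along the ray to the nearest hit with a
// sphere of radius r centered at c, or -1.0f on a miss.
// Solves |o + t*d - c|^2 = r^2 for t (d is assumed normalized).
float sphereIntersect(vec3 o, vec3 d, vec3 c, float r) {
    vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    float b = dot(oc, d);              // half the linear coefficient
    float q = dot(oc, oc) - r * r;     // constant coefficient
    float disc = b * b - q;            // quarter discriminant
    if (disc < 0.0f) return -1.0f;     // no real roots: ray misses
    float s = std::sqrt(disc);
    float t = -b - s;                  // nearer root first
    if (t < 0.0f) t = -b + s;          // origin inside the sphere: far root
    return (t < 0.0f) ? -1.0f : t;
}
```

A ray starting five units in front of a unit sphere and pointing at it should report a hit at t = 4; a ray offset well above the sphere should miss.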

You will need to implement the following features; all of the basic features are finished:

* Raycasting from a camera into a scene through a pixel grid
* Phong lighting for one point light source
* Cube intersection testing
* Sphere surface point sampling
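The first bullet, raycasting from a camera through a pixel grid, can be sketched as follows. This is a simplified host-side version with assumed parameter names mirroring the scene format's EYE/VIEW/UP/FOVY fields; the basecode's `raycastFromCameraKernel()` signature differs.

```cpp
#include <cassert>
#include <cmath>

struct vec3 { float x, y, z; };

static vec3 cross(vec3 a, vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static vec3 normalize(vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Direction of the primary ray through pixel (x, y) on a resx-by-resy image.
// fovy is the vertical half-angle in radians, matching the FOVY field below.
vec3 raycastFromCamera(float x, float y, float resx, float resy,
                       vec3 view, vec3 up, float fovy) {
    vec3 w = normalize(view);                 // forward
    vec3 u = normalize(cross(w, up));         // horizontal axis of the image plane
    vec3 v = cross(u, w);                     // true up
    // Horizontal half-angle from the vertical one and the aspect ratio.
    float fovx = std::atan(std::tan(fovy) * resx / resy);
    // Map the pixel to [-1, 1] in both axes (pixel centers at +0.5).
    float sx = (2.0f * (x + 0.5f) / resx) - 1.0f;
    float sy = 1.0f - (2.0f * (y + 0.5f) / resy);
    float tu = sx * std::tan(fovx), tv = sy * std::tan(fovy);
    vec3 dir = { w.x + tu * u.x + tv * v.x,
                 w.y + tu * u.y + tv * v.y,
                 w.z + tu * u.z + tv * v.z };
    return normalize(dir);
}
```

The center of the image should produce a ray straight along the view direction.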

You are also required to implement at least 2 of the following features; the following optional features were finished:
* Specular reflection
* Soft shadows and area lights
* Texture mapping
* Bump mapping
* Depth of field
* Supersampled antialiasing (implemented with jittering rather than full supersampling, for efficiency)
* Refraction, i.e. glass (the refraction is not robust)
* OBJ Mesh loading and rendering
* Interactive camera


-------------------------------------------------------------------------------
BASE CODE TOUR:
-------------------------------------------------------------------------------
You will be working in three files: raytraceKernel.cu, intersections.h, and interactions.h. Within these files, areas that you need to complete are marked with a TODO comment. Areas that are useful for, and serve as hints for, optional features are marked with TODO (Optional). Functions that are useful for reference are marked with the comment LOOK.

-------------------------------------------------------------------------------
Features Analysis
-------------------------------------------------------------------------------
1. Complete basic ray tracer
* Implemented a basic ray tracer based on the local illumination equation, handling Lambertian surfaces and specular highlights.
* Todo: fix the square-shaped specular highlight artifact.
2. Soft shadows
* Soft shadows are achieved by tracing multiple shadow rays from the intersection point to the light source rather than a single ray. Each shadow ray starts at a random point on the area light and ends at the intersection point.
* However, because of the limit on GPU kernel launch time, at most 25 random rays can be used in one illumination test when reflection rays are enabled. The image therefore takes more iterations to converge.
3. Anti-aliasing
* Aliasing can be handled with supersampling or jittering. This project uses only jittering: because the image after each iteration is the average of all previous iterations, jittering accumulates into a simple supersampling effect over time.
* Todo: jittering gives relatively good results, but when a refractive object is in the scene, the refracted light is deviated over a large distance and the aliasing is exaggerated; jittering alone cannot solve this.
4. Depth of field
* Depth of field uses the method introduced by Cook et al. in "Distributed Ray Tracing" (1984). The results are generally good, but the image takes more iterations and a longer time to converge.
* A post-processing method could perhaps be used to achieve a real-time result.
5. Reflection and refraction
* Reflectance and transmittance are calculated from the Fresnel equations, assuming unpolarised light (an equal mix of s- and p-polarisations).
* However, a remaining problem is that the deviation caused by refractive surfaces is not taken into account when sampling whether a point is illuminated by the light source. A different illumination-test approach could solve both the refraction deviation and the square specular highlight at the same time.
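The soft-shadow sampling described in item 2 can be sketched like this. It is a hedged sketch: the parallelogram light representation and the `occluded` predicate standing in for the scene's intersection test are assumptions, and only `rayNumbers` comes from the README itself.

```cpp
#include <cassert>
#include <cstdlib>

struct vec3 { float x, y, z; };

// Fraction of shadow rays (out of rayNumbers) that reach the area light,
// used to scale the direct lighting: 0 = fully shadowed, 1 = fully lit.
template <typename Occluded>
float softShadowFactor(vec3 p, vec3 lightCorner, vec3 edgeU, vec3 edgeV,
                       int rayNumbers, Occluded occluded) {
    int unoccluded = 0;
    for (int i = 0; i < rayNumbers; ++i) {
        // Pick a random point on the light's parallelogram.
        float a = std::rand() / (float)RAND_MAX;
        float b = std::rand() / (float)RAND_MAX;
        vec3 q = { lightCorner.x + a * edgeU.x + b * edgeV.x,
                   lightCorner.y + a * edgeU.y + b * edgeV.y,
                   lightCorner.z + a * edgeU.z + b * edgeV.z };
        if (!occluded(p, q)) ++unoccluded;   // shadow ray reaches the light
    }
    return unoccluded / (float)rayNumbers;
}
```

With no blockers every shadow ray reaches the light and the factor is 1; with a blocker covering the whole light it is 0.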
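The jittering in item 3 amounts to two small pieces: a random sub-pixel offset per iteration, and a running average of iteration results. A minimal sketch, with names of my own choosing rather than the kernel's:

```cpp
#include <cassert>
#include <cstdlib>

struct Sample { float x, y; };

// Instead of one ray through the pixel center, each iteration offsets the
// sample point randomly within the pixel square.
Sample jitteredSample(int pixelX, int pixelY) {
    float jx = std::rand() / (float)RAND_MAX;  // in [0, 1]
    float jy = std::rand() / (float)RAND_MAX;
    return { pixelX + jx, pixelY + jy };       // somewhere inside the pixel
}

// Running average over iterations; averaging jittered iterations
// approximates supersampling: newAvg = (oldAvg * n + current) / (n + 1).
float accumulate(float oldAvg, float current, int n) {
    return (oldAvg * n + current) / (n + 1);
}
```

After the first iteration the average equals that iteration's value, and each later iteration is blended in with weight 1/(n+1).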
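The depth-of-field sampling in item 4 can be sketched in the spirit of Cook-style distributed ray tracing: jitter the ray origin over a lens aperture and re-aim at the point where the original ray pierces the focal plane. The square aperture and all names here are simplifying assumptions, not the project's actual code.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct vec3 { float x, y, z; };

// Perturb a primary ray for depth of field: points on the focal plane stay
// sharp, everything off it is blurred once iterations are averaged.
void depthOfFieldRay(vec3 eye, vec3 dir, float focalDist, float aperture,
                     vec3 right, vec3 up, vec3* newEye, vec3* newDir) {
    // Focal point along the original (normalized) ray.
    vec3 focus = { eye.x + focalDist * dir.x,
                   eye.y + focalDist * dir.y,
                   eye.z + focalDist * dir.z };
    // Random offset on the lens (square aperture for simplicity).
    float a = aperture * (std::rand() / (float)RAND_MAX - 0.5f);
    float b = aperture * (std::rand() / (float)RAND_MAX - 0.5f);
    *newEye = { eye.x + a * right.x + b * up.x,
                eye.y + a * right.y + b * up.y,
                eye.z + a * right.z + b * up.z };
    vec3 d = { focus.x - newEye->x, focus.y - newEye->y, focus.z - newEye->z };
    float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    *newDir = { d.x / len, d.y / len, d.z / len };
}
```

With a zero aperture the ray is unchanged, which is a handy sanity check.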
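The Fresnel computation in item 5 can be written out with the standard equations for unpolarised light, averaging the s- and p-polarised reflectances. This is a generic textbook version, not necessarily the project's exact function:

```cpp
#include <cassert>
#include <cmath>

// Fresnel reflectance for unpolarised light. n1, n2 are the indices of
// refraction on the incident and transmitted sides; cosI is the cosine of
// the incident angle. Returns 1.0 on total internal reflection.
// The transmittance is 1 - R (no absorption assumed).
float fresnelReflectance(float n1, float n2, float cosI) {
    float sinT2 = (n1 / n2) * (n1 / n2) * (1.0f - cosI * cosI); // Snell's law
    if (sinT2 > 1.0f) return 1.0f;            // total internal reflection
    float cosT = std::sqrt(1.0f - sinT2);
    float rs = (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT); // s-polarised
    float rp = (n1 * cosT - n2 * cosI) / (n1 * cosT + n2 * cosI); // p-polarised
    return 0.5f * (rs * rs + rp * rp);        // equal mix of s and p
}
```

At normal incidence from air into glass (n = 1.5) this gives the familiar 4% reflectance, and a grazing ray inside glass hits total internal reflection.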

-------------------------------------------------------------------------------
BASE CODE TOUR (continued):
-------------------------------------------------------------------------------
* raytraceKernel.cu contains the core raytracing CUDA kernel. You will need to complete:
* cudaRaytraceCore() handles kernel launches and memory management; this function already contains example code for launching kernels, transferring geometry and cameras from the host to the device, and transferring image buffers from the host to the device and back. You will have to complete this function to support passing materials and lights to CUDA.
* raycastFromCameraKernel() is a function that you need to implement. This function once correctly implemented should handle camera raycasting.
* raytraceRay() is the core raytracing CUDA kernel; all of your raytracing logic should be implemented in this CUDA kernel. raytraceRay() should take in a camera, image buffer, geometry, materials, and lights, and should trace a ray through the scene and write the resultant color to a pixel in the image buffer.

* intersections.h contains functions for geometry intersection testing and point generation. You will need to complete:
* boxIntersectionTest(), which takes in a box and a ray and performs an intersection test. This function should work in the same way as sphereIntersectionTest().
* getRandomPointOnSphere(), which takes in a sphere and returns a random point on the surface of the sphere with an even probability distribution. This function should work in the same way as getRandomPointOnCube().

* interactions.h contains functions for ray-object interactions that define how rays behave upon hitting materials and objects. You will need to complete:
* getRandomDirectionInSphere(), which generates a random direction in a sphere with a uniform probability. This function works in a fashion similar to that of calculateRandomDirectionInHemisphere(), which generates a random cosine-weighted direction in a hemisphere.
* calculateBSDF(), which takes in an incoming ray, normal, material, and other information, and returns an outgoing ray. You can either implement this function for ray-surface interactions, or you can replace it with your own function(s).
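As a reference for getRandomPointOnSphere(), one standard way to sample a sphere surface with an even probability distribution is to draw z uniformly in [-1, 1] and the azimuth uniformly in [0, 2π). The sketch below assumes a simple center-plus-radius sphere rather than the basecode's transformed unit sphere:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct vec3 { float x, y, z; };

// Uniformly distributed point on a sphere of radius r centered at c.
// Taking z = cos(theta) uniform in [-1, 1] gives equal surface area per
// z-band (Archimedes' hat-box theorem), so the distribution is even.
vec3 getRandomPointOnSphere(vec3 c, float r) {
    float z = 2.0f * (std::rand() / (float)RAND_MAX) - 1.0f;   // in [-1, 1]
    float phi = 2.0f * 3.14159265f * (std::rand() / (float)RAND_MAX);
    float s = std::sqrt(1.0f - z * z);        // radius of the z-slice
    return { c.x + r * s * std::cos(phi),
             c.y + r * s * std::sin(phi),
             c.z + r * z };
}
```

Any sampled point should lie exactly on the sphere, i.e. at distance r from the center.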

You will also want to familiarize yourself with:

* sceneStructs.h, which contains definitions for how geometry, materials, lights, cameras, and animation frames are stored in the renderer.
* utilities.h, which serves as a kitchen-sink of useful functions

-------------------------------------------------------------------------------
NOTES ON GLM:
-------------------------------------------------------------------------------
This project uses GLM, the GL Math library, for linear algebra. You need to know two important points on how GLM is used in this project:

* In this project, indices in GLM vectors (such as vec3, vec4), are accessed via swizzling. So, instead of v[0], v.x is used, and instead of v[1], v.y is used, and so on and so forth.
* GLM Matrix operations work fine on NVIDIA Fermi cards and later, but pre-Fermi cards do not play nice with GLM matrices. As such, in this project, GLM matrices are replaced with a custom matrix struct, called a cudaMat4, found in cudaMat4.h. A custom function for multiplying glm::vec4s and cudaMat4s is provided as multiplyMV() in intersections.h.
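For illustration, a cudaMat4 holding four row vectors and a multiplyMV() over it might look like the following. This is an assumption-laden sketch: the real definitions live in cudaMat4.h and intersections.h and may lay the matrix out differently.

```cpp
#include <cassert>

struct vec4 { float x, y, z, w; };
struct cudaMat4 { vec4 x, y, z, w; };   // assumed: four rows of a 4x4 matrix

static float dot4(vec4 a, vec4 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// One dot product per output component: plain arithmetic, which is why a
// custom struct sidesteps the pre-Fermi problems with GLM matrices.
vec4 multiplyMV(cudaMat4 m, vec4 v) {
    return { dot4(m.x, v), dot4(m.y, v), dot4(m.z, v), dot4(m.w, v) };
}
```

Multiplying by the identity matrix should return the input vector unchanged.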

-------------------------------------------------------------------------------
TAKUAscene FORMAT:
-------------------------------------------------------------------------------
This project uses a custom scene description format, called TAKUAscene. TAKUAscene files are flat text files that describe all geometry, materials, lights, cameras, render settings, and animation frames inside of the scene. Items in the format are delimited by new lines, and comments can be added at the end of each line preceded with a double-slash.

Materials are defined in the following fashion:

* MATERIAL (material ID) //material header
* RGB (float r) (float g) (float b) //diffuse color
* SPECX (float specx) //specular exponent
* SPECRGB (float r) (float g) (float b) //specular color
* REFL (bool refl) //reflectivity flag, 0 for no, 1 for yes
* REFR (bool refr) //refractivity flag, 0 for no, 1 for yes
* REFRIOR (float ior) //index of refraction for Fresnel effects
* SCATTER (float scatter) //scatter flag, 0 for no, 1 for yes
* ABSCOEFF (float r) (float g) (float b) //absorption coefficient for scattering
* RSCTCOEFF (float rsctcoeff) //reduced scattering coefficient
* EMITTANCE (float emittance) //the emittance of the material. Anything >0 makes the material a light source.

Cameras are defined in the following fashion:

* CAMERA //camera header
* RES (float x) (float y) //resolution
* FOVY (float fovy) //vertical field of view half-angle. the horizontal angle is calculated from this and the resolution
* ITERATIONS (float iterations) //how many iterations to refine the image, only relevant for supersampled antialiasing, depth of field, area lights, and other distributed raytracing applications
* FILE (string filename) //file to output render to upon completion
* frame (frame number) //start of a frame
* EYE (float x) (float y) (float z) //camera's position in worldspace
* VIEW (float x) (float y) (float z) //camera's view direction
* UP (float x) (float y) (float z) //camera's up vector

Objects are defined in the following fashion:
* OBJECT (object ID) //object header
* (cube OR sphere OR mesh) //type of object, can be either "cube", "sphere", or "mesh". Note that cubes and spheres are unit sized and centered at the origin.
* material (material ID) //material to assign this object
* frame (frame number) //start of a frame
* TRANS (float transx) (float transy) (float transz) //translation
* ROTAT (float rotationx) (float rotationy) (float rotationz) //rotation
* SCALE (float scalex) (float scaley) (float scalez) //scale

An example TAKUAscene file setting up two frames inside of a Cornell Box can be found in the scenes/ directory.
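For instance, a minimal material-plus-object fragment in this format might look like the block below. The values are illustrative only, not taken from the provided scene file:

```
MATERIAL 0          //plain white diffuse material
RGB 1 1 1
SPECX 0
SPECRGB 1 1 1
REFL 0
REFR 0
REFRIOR 0
SCATTER 0
ABSCOEFF 0 0 0
RSCTCOEFF 0
EMITTANCE 0

OBJECT 0            //unit sphere at the origin using material 0
sphere
material 0
frame 0
TRANS 0 0 0
ROTAT 0 0 0
SCALE 1 1 1
```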
BLOG LINK:
-------------------------------------------------------------------------------
http://gpuprojects.blogspot.com/

-------------------------------------------------------------------------------
SUBMISSION
-------------------------------------------------------------------------------
As with the previous project, you should fork this project and work inside of your fork. Upon completion, commit your finished project back to your fork. DO NOT make a pull request to merge back to the master version.
You should include a README file detailing what features you implemented, any difficulties you had, and so on and so forth.

-------------------------------------------------------------------------------
Instructions on building and running
-------------------------------------------------------------------------------
The default setting is the soft-shadow result without the depth-of-field effect.

1. If your machine gives warnings such as "Kernel failed! unknown error!" or "Kernel failed! the launch timed out and was terminated", the soft shadows may be taking too long to compute on your machine; decrease the value in "__constant__ int rayNumbers=10;".

2. If you don't want the soft shadow effect, set the value in "__constant__ int rayNumbers=10;" to 1 and change "__constant__ bool softShadow=true;" to false.

3. If you want to see the depth of field result, uncomment the line "//#define DEPTHOFFIELD".
Binary file added refraction.PNG