Technical Solution for Everyday Science Visualization

A. 360 Panorama Player

The goal of this project was to develop a panorama player that displays 360-degree panoramic images for an immersive viewing experience. The panorama was displayed via a spherical projection, i.e., by mapping the image onto a sphere viewed from the inside.

The implementation process involved several steps:

  1. Creating a Sphere: A sphere was created by subdividing its surface into many small triangles; with a sufficiently fine subdivision, the sphere appears smooth. The surface was laid out along longitudinal and latitudinal lines, forming rectangles that were each split into two triangles. The vertices were generated by a nested loop over the latitudinal and longitudinal angles (a code sketch follows this list).

  2. Determining Vertex and Texture Coordinates: The coordinates of the vertices on the sphere's surface were computed with the standard spherical-to-Cartesian formulas from the radius and the two angles.

  3. Texture Mapping: Texture coordinates (u, v) were used to map the panoramic image onto the sphere's surface. This process, known as UV mapping, associates each vertex with the texture coordinate it samples, so that the image wraps correctly around the sphere.

  4. MVP Matrix and Coordinate Transformations: To achieve the immersive 360-degree effect, a series of matrix transformations was applied to the vertex coordinates. The Model-View-Projection (MVP) matrix, the product of the ProjectionMatrix, ViewMatrix, and ModelMatrix, was used to project the scene onto the screen (see the second sketch after this list). Clipping to limit the visible area was also considered to ensure proper display, although it was not implemented in this project.

  5. User Interaction: The sphere could be rotated in response to user input, providing an interactive viewing experience. In this implementation the rotation was driven by mouse movements, but the same concept applies to sensor-based rotation once rotation data is available. The ViewMatrix was updated on each input event so that the scene rotates accordingly.
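
The sketch below, in TypeScript, illustrates steps 1-3: generating vertex positions, (u, v) texture coordinates, and triangle indices for a latitude/longitude sphere. The function name and band counts are illustrative assumptions; the original code is not shown in the source.

```typescript
// A minimal sketch of steps 1-3; names and parameters are illustrative.
function generateSphere(radius: number, latBands: number, lonBands: number) {
  const positions: number[] = [];
  const uvs: number[] = [];
  const indices: number[] = [];

  for (let lat = 0; lat <= latBands; lat++) {
    const theta = (lat * Math.PI) / latBands;      // polar angle in [0, pi]
    for (let lon = 0; lon <= lonBands; lon++) {
      const phi = (lon * 2 * Math.PI) / lonBands;  // azimuth in [0, 2*pi]

      // Step 2: spherical-to-Cartesian conversion for the vertex position.
      positions.push(
        radius * Math.sin(theta) * Math.cos(phi),
        radius * Math.cos(theta),
        radius * Math.sin(theta) * Math.sin(phi),
      );

      // Step 3 (UV mapping): the panorama maps linearly to the two angles.
      uvs.push(lon / lonBands, 1 - lat / latBands);
    }
  }

  // Step 1: each lat/lon rectangle becomes two triangles.
  for (let lat = 0; lat < latBands; lat++) {
    for (let lon = 0; lon < lonBands; lon++) {
      const a = lat * (lonBands + 1) + lon;  // top-left vertex of the rectangle
      const b = a + lonBands + 1;            // vertex directly below it
      indices.push(a, b, a + 1, b, b + 1, a + 1);
    }
  }
  return { positions, uvs, indices };
}
```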
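
The second sketch covers steps 4 and 5, assuming the gl-matrix library for the matrix math; the event wiring, rotation sensitivity, and field of view are assumptions for illustration, not details from the source.

```typescript
import { mat4, vec3 } from "gl-matrix";

// Sketch of steps 4-5: compose MVP = Projection * View * Model each frame and
// drive the view direction from mouse drags.
const canvas = document.querySelector("canvas")!;
let yaw = 0;
let pitch = 0;

canvas.addEventListener("mousemove", (e) => {
  if (e.buttons !== 1) return;               // rotate only while dragging
  yaw += e.movementX * 0.005;
  pitch += e.movementY * 0.005;
  // Clamp pitch so the view cannot flip over the poles.
  pitch = Math.max(-1.55, Math.min(1.55, pitch));
});

function computeMVP(aspect: number): mat4 {
  // The camera sits at the sphere's center and looks outward.
  const dir = vec3.fromValues(
    Math.cos(pitch) * Math.sin(yaw),
    Math.sin(pitch),
    Math.cos(pitch) * Math.cos(yaw),
  );
  const view = mat4.lookAt(mat4.create(), [0, 0, 0], dir, [0, 1, 0]);
  const proj = mat4.perspective(mat4.create(), Math.PI / 3, aspect, 0.1, 100);
  const model = mat4.create();               // identity: the sphere is static

  const mvp = mat4.create();
  mat4.multiply(mvp, proj, view);            // P * V
  return mat4.multiply(mvp, mvp, model);     // (P * V) * M
}
```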

B. Implementation of 3D Model Annotation

The objective of this project was to enable annotation of specific points on a 3D model via mouse clicks, allowing one or more annotations to be added to the scene. Annotations could be dragged to new positions, and they maintained their positions relative to the model through its animations, translations, and rotations.

The implementation process involved the following steps:

  1. Raycaster for Annotation Coordinate Selection: The mouse click position, given in screen coordinates, first had to be transformed into 3D space coordinates. A ray was cast from the camera through the mouse position, and its intersection with the model's surface became the annotation point in 3D space (see the raycasting sketch after this list).

  2. Interactive Rendering with WebGL: To add annotations to the scene, a separate shader was created so that annotations render independently of the scene. Each annotation was represented by a hotspot consisting of a mesh and a material. The data required for rendering, such as positions, UV coordinates, colors, and indices, was passed to the shader, and each hotspot object was bound to its specific mesh and material.

  3. Reusing Mesh and Material: Because hotspot data is simple, multiple hotspots could share the same mesh and material. Consolidating the meshes and materials of all hotspots reduced rendering from one pass per hotspot to a single operation, improving performance (a sketch follows this list).

  4. Vertex Stream Compression: To save space and speed up data transfer to the shader, compression techniques were employed to reduce the redundancy in hotspot vertex data (one possible scheme is sketched after this list).

  5. Hotspot Occlusion: Occlusion between hotspots, and between hotspots and other objects in the scene, was considered to ensure correct rendering.

  6. Mouse Drag Transformation of Hotspot Positions: While dragging a hotspot, the mouse position had to be continuously tracked and converted into 3D space coordinates so that the hotspot follows the cursor across the model's surface (see the final sketch below).
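
The following TypeScript sketch illustrates step 1. It assumes Three.js, which the source's mention of a Raycaster suggests but does not confirm; camera and model stand for objects created during the scene setup, which is not shown.

```typescript
import * as THREE from "three";

// Sketch: convert a mouse click to a point on the model's surface.
const raycaster = new THREE.Raycaster();
const ndc = new THREE.Vector2();

function pickAnnotationPoint(
  event: MouseEvent,
  canvas: HTMLCanvasElement,
  camera: THREE.Camera,
  model: THREE.Object3D,
): THREE.Vector3 | null {
  // Screen coordinates -> normalized device coordinates in [-1, 1].
  const rect = canvas.getBoundingClientRect();
  ndc.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  ndc.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  // Cast a ray from the camera through the click and take the nearest hit.
  raycaster.setFromCamera(ndc, camera);
  const hits = raycaster.intersectObject(model, true);
  return hits.length > 0 ? hits[0].point.clone() : null;
}
```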
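
For steps 2 and 3, one way to realize the described mesh/material sharing is instanced rendering, sketched below under the same Three.js assumption; the original may instead have merged vertex buffers manually. The depthTest flag relates to the occlusion concern of step 5.

```typescript
import * as THREE from "three";

// Sketch: all hotspots share one geometry and one material and are drawn in a
// single call. InstancedMesh stands in for the consolidation the source
// describes; the exact mechanism in the original is not specified.
function createHotspots(points: THREE.Vector3[]): THREE.InstancedMesh {
  const quad = new THREE.PlaneGeometry(0.05, 0.05);   // one shared quad
  const material = new THREE.MeshBasicMaterial({
    color: 0xff3333,
    depthTest: true,  // let the model occlude hotspots behind it (step 5)
  });
  const hotspots = new THREE.InstancedMesh(quad, material, points.length);

  const m = new THREE.Matrix4();
  points.forEach((p, i) => {
    m.makeTranslation(p.x, p.y, p.z);                 // place instance i
    hotspots.setMatrixAt(i, m);
  });
  hotspots.instanceMatrix.needsUpdate = true;
  return hotspots;
}
```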
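
Step 4 does not name a concrete scheme; a common option, sketched here purely as an assumption, is storing per-vertex colors as normalized 8-bit attributes instead of 32-bit floats, quartering that part of the stream.

```typescript
import * as THREE from "three";

// Sketch: pack per-vertex RGBA colors (given as 0..1 floats) into bytes.
function makeColorAttribute(rgbaPerVertex: number[][]): THREE.BufferAttribute {
  const bytes = new Uint8Array(rgbaPerVertex.length * 4);
  rgbaPerVertex.forEach((rgba, i) =>
    rgba.forEach((c, j) => { bytes[i * 4 + j] = Math.round(c * 255); })
  );
  // `normalized = true` makes the shader read the bytes back as 0..1 floats.
  return new THREE.BufferAttribute(bytes, 4, true);
}
```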
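
Finally, step 6 can reuse the same raycast on every mousemove. This sketch assumes the pickAnnotationPoint helper and the hotspots mesh from the sketches above.

```typescript
import * as THREE from "three";

// Sketch: pick a hotspot on mousedown, then re-run the surface raycast on each
// mousemove and write the new position into that instance's matrix.
function enableHotspotDrag(
  canvas: HTMLCanvasElement,
  camera: THREE.Camera,
  model: THREE.Object3D,
  hotspots: THREE.InstancedMesh,
) {
  let draggedIndex: number | null = null;
  const ray = new THREE.Raycaster();

  canvas.addEventListener("mousedown", (event) => {
    // Pick a hotspot instance under the cursor (instanceId identifies it).
    const rect = canvas.getBoundingClientRect();
    const ndc = new THREE.Vector2(
      ((event.clientX - rect.left) / rect.width) * 2 - 1,
      -((event.clientY - rect.top) / rect.height) * 2 + 1,
    );
    ray.setFromCamera(ndc, camera);
    const hit = ray.intersectObject(hotspots)[0];
    draggedIndex = hit?.instanceId ?? null;
  });

  canvas.addEventListener("mousemove", (event) => {
    if (draggedIndex === null) return;
    const point = pickAnnotationPoint(event, canvas, camera, model);
    if (point) {
      const m = new THREE.Matrix4().makeTranslation(point.x, point.y, point.z);
      hotspots.setMatrixAt(draggedIndex, m);
      hotspots.instanceMatrix.needsUpdate = true;
    }
  });

  canvas.addEventListener("mouseup", () => { draggedIndex = null; });
}
```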