Tuesday, 6 October 2020

All about Ignou mcs-031/ Design and Analysis of Algorithm important guess solved paper June 2020 upcoming EXAM

1. Differentiate between the types of asymptotic notations.

Types of Asymptotic Notations

We use three types of asymptotic notation to represent the growth of an algorithm as its input increases:

  1. Big Theta (Θ)
  2. Big Oh(O)
  3. Big Omega (Ω)

  Tight Bounds: Theta

When we say tight bound, we mean that the running time represented by Big-Θ notation is pinned between two constant multiples of the same function, so it describes the range within which the actual execution time of the algorithm will lie.

For example, if for some algorithm the time complexity is given by the expression 3n² + 5n, then in Big-Θ notation the time complexity is Θ(n²): we ignore the constant coefficient and drop the insignificant lower-order term 5n.

Here, in the example above, a complexity of Θ(n²) means that the running time for any input n will remain between k1·n² and k2·n², where k1 and k2 are two positive constants, thereby tightly bounding the expression that represents the growth of the algorithm.





Upper Bounds: Big-O

This notation is known as the upper bound of the algorithm, or the worst case of an algorithm.

It tells us that a certain function will never exceed a specified time for any value of input n.

The question is: why do we need this representation when we already have the Big-Θ notation, which represents the tightly bound running time of an algorithm? Let's take a small example to understand this.

Consider the Linear Search algorithm, in which we traverse the array elements one by one to search for a given number.

In the worst case, starting from the front of the array, we find the element we are searching for at the end, which leads to a time complexity of n, where n is the total number of elements.

But it can also happen that the element we are searching for is the first element of the array, in which case the time complexity is 1.

Saying that the tight-bound (Big-Θ) time complexity of linear search is Θ(n) claims that the time required is always proportional to n, which only describes the average behaviour. When we instead use Big-O notation and say the time complexity is O(n), we mean that the running time will never exceed n; it defines an upper bound, so the time can be less than or equal to n, which is the correct statement for every case.

This is the reason why, most of the time, you will see Big-O notation being used to represent the time complexity of an algorithm: it is the statement that makes the most practical sense.
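As an illustration (a minimal Python sketch, not part of the original notes), the linear search below returns after one comparison in the best case and after n comparisons in the worst case, which is exactly why its bounds are written Ω(1) and O(n):

def linear_search(arr, target):
    # Return the index of target in arr, or -1 if it is absent.
    for i, value in enumerate(arr):   # up to n iterations in the worst case
        if value == target:
            return i                  # best case: found at index 0 after one comparison
    return -1

data = [7, 3, 9, 1, 4]
print(linear_search(data, 7))   # best case  -> 0
print(linear_search(data, 4))   # worst case -> 4 (last element)
print(linear_search(data, 8))   # not found  -> -1 (also n comparisons)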


Lower Bounds: Omega

Big-Omega notation is used to define the lower bound of an algorithm, or, in other words, the best case of an algorithm.

It indicates the minimum time required by an algorithm over all input values, and therefore describes the best case of the algorithm.

In simple words, when we represent the time complexity of an algorithm in Big-Ω form, we mean that the algorithm will take at least this much time to complete its execution; it can certainly take more time than this.

Wednesday, 11 December 2019

All about Ignou mcs053/ Computer Graphics important guess solved paper June 2020 upcoming EXAM


1. Midpoint circle generation algorithm

This algorithm calculates the perimeter points of a circle in the first octant; the points of the other octants can then be obtained easily as mirror points, because a circle is symmetric about its centre.

The algorithm is very similar to the line generation algorithm; only the boundary condition is different.
[Figure: diagram of the midpoint circle]

In this algorithm the decision parameter is based on the circle equation. We know that the equation of a circle with centre (0, 0) is x² + y² = r².
Now let us define the circle function: fcircle(x, y) = x² + y² − r²
(1). If fcircle(x, y) < 0, then (x, y) is inside the circle boundary.
(2). If fcircle(x, y) > 0, then (x, y) is outside the circle boundary.
(3). If fcircle(x, y) = 0, then (x, y) is on the circle boundary.
Boundary Condition: Whether the mid-point lies inside or outside the circle can be decided by using the above formula.
For any given pixel (x, y), the next pixel to be plotted is either (x, y+1) or (x−1, y+1). This can be decided by the following steps.

1. Find the midpoint p of the two candidate pixels, i.e. (x − 0.5, y + 1).
2. If p lies inside or on the circle perimeter, plot the pixel (x, y+1); otherwise, if it lies outside, plot the pixel (x−1, y+1).

In our program we denote fcircle by P. The value of P is calculated at the midpoint of the two contending pixels, i.e. (x − 0.5, y + 1).
Each pixel is indexed with the subscript k.
Pk = (xk − 0.5)² + (yk + 1)² − r²
Now, xk+1 = xk or xk − 1, and yk+1 = yk + 1, so
Pk+1 = (xk+1 − 0.5)² + (yk+1 + 1)² − r²
     = (xk+1 − 0.5)² + [(yk + 1) + 1]² − r²
     = (xk+1 − 0.5)² + (yk + 1)² + 2(yk + 1) + 1 − r²
     = (xk+1 − 0.5)² + [(xk − 0.5)² − (xk − 0.5)²] + (yk + 1)² − r² + 2(yk + 1) + 1
     = Pk + (xk+1 − 0.5)² − (xk − 0.5)² + 2(yk + 1) + 1
Hence
Pk+1 = Pk + 2(yk + 1) + 1, when Pk <= 0, i.e. the midpoint is inside the circle (xk+1 = xk)
Pk+1 = Pk + 2(yk + 1) − 2(xk − 1) + 1, when Pk > 0, i.e. the midpoint is outside the circle (xk+1 = xk − 1)

The first point to be plotted is (r, 0) on the x-axis. The initial value of P is calculated as follows:
P1 = (r − 0.5)² + (0 + 1)² − r²
   = 1.25 − r
   ≈ 1 − r (when rounded off)

Examples:
Input: Centre ->(0,0), Radius ->3
Output: (3,0) (-3,0) (0,3) (0,-3)
                (3,1) (-3,1) (3,-1) (-3,-1)
                (1,3) (-1,3) (1,-3) (-1,-3)
                (2,2) (-2,2) (2,-2) (-2,-2)
k | Pk | Xk | Yk | Pk+1 | Xk+1 | Yk+1 | Output
1 | -2 | 3  | 0  | -1   | 3    | 1    | (3,1) (-3,1) (3,-1) (-3,-1) (1,3) (-1,3) (1,-3) (-1,-3)
2 | -1 | 3  | 1  | 2    | 2    | 2    | (2,2) (-2,2) (2,-2) (-2,-2)
3 | 2  | 2  | 2  | -    | -    | -    | Break from loop (x = y, so the octant is complete)

Input: Center ->(4,4), Radius ->2
Output: (6,4) (2,4) (4,6) (4,2)
        (6,5) (2,5) (6,3) (2,3)
        (5,6) (3,6) (5,2) (3,2)

ALGORITHM
  • Input the radius r and the circle centre (xc, yc). Obtain the first point on the circumference of a circle centred on the origin as (x0, y0) = (0, r).
  • Calculate the initial decision parameter as P0 = 5/4 − r, or 1 − r when rounded.
  • At each xk position, starting at k = 0, perform the following test:
If Pk < 0, the next point is (xk+1, yk) and Pk+1 = Pk + 2xk+1 + 1.
Else the next point along the circle is (xk+1, yk−1) and
Pk+1 = Pk + 2xk+1 + 1 − 2yk+1.
  • Determine the symmetry points in the other seven octants.
  • Move each calculated point to the given centre, that is:
x = x + xc
y = y + yc
  • Finally, repeat steps 3 to 5 until x >= y. (A runnable sketch of these steps is given below.)
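Below is a minimal Python sketch of the steps above (the function name and output format are illustrative assumptions, not part of the original notes). It starts at (0, r), updates the decision parameter incrementally, mirrors each point into all eight octants, and finally translates by the centre:

def midpoint_circle(xc, yc, r):
    # Return the integer perimeter points of a circle using the midpoint algorithm.
    points = set()
    x, y = 0, r
    p = 1 - r                       # initial decision parameter (5/4 - r, rounded)
    while x <= y:
        # eight-way symmetry, translated to the centre (xc, yc)
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((xc + px, yc + py))
        x += 1
        if p < 0:
            p += 2 * x + 1          # midpoint inside: keep y
        else:
            y -= 1
            p += 2 * x + 1 - 2 * y  # midpoint outside: step y down
    return sorted(points)

print(midpoint_circle(0, 0, 3))     # includes (3,0), (0,3), (3,1), (2,2) and their mirrors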
2. Prove that the composition of two successive rotations is additive, i.e. R(θ1)·R(θ2) = R(θ1 + θ2).
Solution: The 2D rotation matrix R(θ) is

R(θ) = | cos θ   −sin θ |
       | sin θ    cos θ |

Multiplying two rotation matrices gives

R(θ1)·R(θ2) = | cos θ1 cos θ2 − sin θ1 sin θ2    −(sin θ1 cos θ2 + cos θ1 sin θ2) |
              | sin θ1 cos θ2 + cos θ1 sin θ2       cos θ1 cos θ2 − sin θ1 sin θ2 |

Using the angle-addition identities cos(θ1 + θ2) = cos θ1 cos θ2 − sin θ1 sin θ2 and sin(θ1 + θ2) = sin θ1 cos θ2 + cos θ1 sin θ2, this equals

| cos(θ1 + θ2)   −sin(θ1 + θ2) |  =  R(θ1 + θ2)
| sin(θ1 + θ2)    cos(θ1 + θ2) |

so two successive rotations are additive.
3. What are the various types of graphic image formats? What do you mean by a grayscale image?
Definition: Graphic images are stored digitally using a small number of standardized graphic file formats, including bitmap (BMP), TIFF, JPEG, GIF and PNG; they can also be stored as raw, unprocessed data.
There are likely billions of graphic images available on the World Wide Web, and with few exceptions, almost any user can view any of them with no difficulty. This is because all those images are stored in what amounts to a handful of file formats. Before discussing the principal graphics file formats, however, we need to review the two fundamental types of graphics: raster and vector.
A raster image is like a photo in your newspaper. Look closely and you’ll see it’s made up of equally spaced round dots in several distinct colors. But if you look at an ad featuring a line drawing or, better yet, a banner headline, you won’t see an interrupted line of dots but a solid image bounded by smooth curves. Those are vector graphics. Many graphics are created as vector graphics and then published as raster images.
Gray scale image:
A grayscale (or gray-level) image is simply one in which the only colors are shades of gray. The reason for differentiating such images from any other sort of color image is that less information needs to be provided for each pixel. In fact, a 'gray' color is one in which the red, green and blue components all have equal intensity in RGB space, so it is only necessary to specify a single intensity value for each pixel, as opposed to the three intensities needed to specify each pixel in a full color image.
Often, the grayscale intensity is stored as an 8-bit integer, giving 256 possible different shades of gray from black to white. If the levels are evenly spaced, the difference between successive gray levels is significantly better than the gray-level resolving power of the human eye.
Grayscale images are very common and entirely sufficient for many tasks, such as face detection, so there is no need to use more complicated and harder-to-process color images.
4. Discuss any two audio file formats that are used in multimedia.
An audio file format is a file format for storing digital audio data on a computer system. The bit layout of the audio data (excluding metadata) is called the audio coding format and can be uncompressed, or compressed to reduce the file size, often using lossy compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded in a container format or an audio data format with defined storage layer.
Format types:-
There are three major groups of audio file formats:

Uncompressed audio format
One major uncompressed audio format, LPCM, is the same variety of PCM as used in Compact Disc Digital Audio and is the format most commonly accepted by low-level audio APIs and D/A converter hardware. Although LPCM can be stored on a computer as a raw audio format, it is usually stored in a .wav file on Windows or in a .aiff file on macOS. The AIFF format is based on the Interchange File Format (IFF), and the WAV format is based on the similar Resource Interchange File Format (RIFF). WAV and AIFF are designed to store a wide variety of audio formats, lossless and lossy; they just add a small, metadata-containing header before the audio data to declare the format of the audio data, such as LPCM with a particular sample rate, bit depth, endianness and number of channels. Since WAV and AIFF are widely supported and can store LPCM, they are suitable file formats for storing and archiving an original recording.
BWF (Broadcast Wave Format) is a standard audio format created by the European Broadcasting Union as a successor to WAV. Among other enhancements, BWF allows more robust metadata to be stored in the file. See European Broadcasting Union: Specification of the Broadcast Wave Format (EBU Technical document 3285, July 1997). This is the primary recording format used in many professional audio workstations in the television and film industry. BWF files include a standardized timestamp reference which allows for easy synchronization with a separate picture element. Stand-alone, file based, multi-track recorders from AETA, Sound Devices, Zaxcom, HHB Communications Ltd, Fostex, Nagra, Aaton, and TASCAM all use BWF as their preferred format.

Lossless compressed audio format
A lossless compressed format stores data in less space without losing any information. The original, uncompressed data can be recreated from the compressed version.
Uncompressed audio formats encode both sound and silence with the same number of bits per unit of time. Encoding an uncompressed minute of absolute silence produces a file of the same size as encoding an uncompressed minute of music. In a lossless compressed format, however, the music would occupy a smaller file than an uncompressed format and the silence would take up almost no space at all.
Lossless compression formats include the common FLAC, WavPack, Monkey's Audio and ALAC (Apple Lossless). They provide a compression ratio of about 2:1 (i.e. their files take up half the space of PCM). Development in lossless compression formats aims to reduce processing time while maintaining a good compression ratio.
Lossy compressed audio format
Lossy compression enables even greater reductions in file size by removing some of the audio information and simplifying the data. This, of course, results in a reduction in audio quality, but a variety of techniques are used, mainly by exploiting psychoacoustics, to remove the parts of the sound that have the least effect on perceived quality, and to minimize the amount of audible noise added during the process. The popular MP3 format is probably the best-known example, but the AAC format found on the iTunes Music Store is also common. Most formats offer a range of degrees of compression, generally measured in bit rate. The lower the rate, the smaller the file and the more significant the quality loss.

5. What is animation?
Animation is the process of designing, drawing, making layouts and preparing photographic sequences which are integrated into multimedia and gaming products. Animation involves the manipulation and management of still images to generate the illusion of movement. A person who creates animations is called an animator. He or she uses various computer technologies to capture the still images and then to animate them in the desired sequence.

Multimedia is the term used to represent a combination of visual and audio materials gathered from various resources and then combined into one single product. A multimedia product can be a set of texts, graphic arts, sounds, animations and videos. Precisely, the term multimedia is used to refer to the combination of visual and audio materials into a single common presentation which can be played on a computer, including CD-ROM or digital video, internet or web technology, streaming audio or video, and data projection systems.

The modern entertainment industry, i.e. film and television, has gained new heights because of advances in animation, graphics and multimedia. Television advertisements, cartoon serials, presentations and model designs all use animation and multimedia techniques.
Types of Animation
  • Traditional animation (cel animation or hand-drawn animation)
  • Stop motion animation (Claymation, Cut-outs)
  • Motion Graphics (Typography, Animated logo)
  • Computer animation
  •   2D animation
  •   3D animation
For instance, a bouncing-ball animation may consist of just six frames played back at 10 frames per second.
Animation is a method in which pictures are manipulated to appear as moving images. In traditional animation, images are drawn or painted by hand.

6. Discuss the advantages of the Gouraud shading scheme over the constant shading scheme.
Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness.
This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud shading, renders a polygon surface by linearly interpolating intensity values across the surface. Intensity values for each polygon are matched with the values of adjacent polygons along the common edges, thus eliminating the intensity discontinuities that can occur in flat shading.
Each polygon surface is rendered with Gouraud Shading by performing the following calculations:
  • Determine the average unit normal vector at each polygon vertex.
  • Apply an illumination model to each vertex to determine the vertex intensity.
  • Linearly interpolate the vertex intensities over the surface of the polygon.
At each polygon vertex, we obtain a normal vector by averaging the surface normals of all polygons sharing that vertex, as shown in the figure.
Thus, for any vertex position V, we acquire the unit vertex normal with the calculation

NV = (Σk Nk) / |Σk Nk|,  summed over the polygons sharing V

Once we have the vertex normals, we can determine the intensity at the vertices from a lighting model.
The next step is interpolating intensities along the polygon edges. For each scan line, the intensity at the intersection of the scan line with a polygon edge is linearly interpolated from the intensities at the edge endpoints. For example, in the figure, the polygon edge with endpoint vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical displacement of the scan line:

I4 = I1 (y4 − y2)/(y1 − y2) + I2 (y1 − y4)/(y1 − y2)

Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the intensity values at vertices 2 and 3. Once these bounding intensities are established for a scan line, an interior point (such as point P in the figure) is interpolated from the bounding intensities at points 4 and 5 as

Ip = I4 (x5 − xp)/(x5 − x4) + I5 (xp − x4)/(x5 − x4)

Incremental calculations are used to obtain successive edge intensity values between scan lines and successive intensities along a scan line. If the intensity at edge position (x, y) is interpolated as I, then the intensity along this edge for the next scan line, y − 1, is

I' = I + (I2 − I1)/(y1 − y2)
Similar calculations are used to obtain intensities at successive horizontal pixel positions along each scan line.
When surfaces are to be rendered in color, the intensity of each color component is calculated at the vertices. Gouraud shading can be combined with a hidden-surface algorithm to fill in the visible polygons along each scan line. An example of an object shaded with the Gouraud method appears in the following figure:
Gouraud shading removes the intensity discontinuities associated with the constant-shading model, but it has some other deficiencies. Highlights on the surface are sometimes displayed with anomalous shapes, and the linear intensity interpolation can cause bright or dark intensity streaks, called Mach bands, to appear on the surface. These effects can be reduced by dividing the surface into a greater number of polygon faces or by using other methods, such as Phong shading, that require more calculations.
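To see what the linear interpolation looks like in code, here is a small Python sketch of the per-scan-line step (the function names and the triangle set-up are illustrative assumptions, not part of the original notes):

def lerp(a, b, t):
    # Linear interpolation between a and b for t in [0, 1].
    return a + (b - a) * t

def gouraud_span(y, v1, v2, v3):
    # Interpolate vertex intensities along one scan line of a triangle.
    # Each vertex is (x, y, intensity); the scan line y is assumed to cross
    # edge v1-v2 and edge v1-v3. Returns (x_left, I_left, x_right, I_right);
    # interior pixels are then interpolated between I_left and I_right.
    t12 = (v1[1] - y) / (v1[1] - v2[1])      # fraction of the way down edge v1-v2
    t13 = (v1[1] - y) / (v1[1] - v3[1])      # fraction of the way down edge v1-v3
    xa, ia = lerp(v1[0], v2[0], t12), lerp(v1[2], v2[2], t12)
    xb, ib = lerp(v1[0], v3[0], t13), lerp(v1[2], v3[2], t13)
    return (xa, ia, xb, ib) if xa <= xb else (xb, ib, xa, ia)

# vertex intensities 1.0, 0.2 and 0.6 from the lighting model
print(gouraud_span(5, (10, 10, 1.0), (0, 0, 0.2), (20, 0, 0.6)))   # (5.0, 0.6, 15.0, 0.8)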
7. Explain the Z-buffer visible surface detection method.
When viewing a picture containing non-transparent objects and surfaces, it is not possible to see objects that lie behind the objects closer to the eye. To get a realistic screen image, removal of these hidden surfaces is a must. The identification and removal of these surfaces is called the hidden-surface problem.
The Z-buffer, also known as the depth-buffer method, is one of the commonly used methods for hidden-surface detection. It is an image-space method: image-space methods work on the pixels to be drawn in 2D. For these methods, the running time complexity is the number of pixels times the number of objects, and the space complexity is twice the number of pixels, because two arrays of pixels are required: one for the frame buffer and the other for the depth buffer.
The Z-buffer method compares surface depths at each pixel position on the projection plane. Normally the z-axis represents depth. The algorithm for the Z-buffer method is given below:


Algorithm :
First of all, initialize the depth of each pixel:
    d(i, j) = infinity (maximum depth)
Initialize the color value of each pixel:
    c(i, j) = background color
Then, for each polygon, do the following steps:

for (each pixel in polygon's projection)
{
    find depth i.e, z of polygon
    at (x, y) corresponding to pixel (i, j)
    
    if (z < d(i, j))
    {
        d(i, j) = z;
        c(i, j) = color;
    }
}
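A minimal runnable Python version of this loop is sketched below (the way polygons are supplied, as a list of covered pixels plus a per-pixel depth function, is an illustrative assumption, not part of the original algorithm):

import math

WIDTH, HEIGHT = 4, 3
depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]   # d(i, j) = infinity
frame = [["bg"] * WIDTH for _ in range(HEIGHT)]       # c(i, j) = background colour

def rasterize(polygons):
    # polygons: list of (pixels, depth_fn, colour), where pixels is the list of
    # (i, j) positions covered by the polygon's projection and depth_fn(i, j)
    # returns its z value there.
    for pixels, depth_fn, colour in polygons:
        for i, j in pixels:
            z = depth_fn(i, j)
            if z < depth[j][i]:          # closer than what is stored so far
                depth[j][i] = z
                frame[j][i] = colour

# two overlapping "polygons": a constant-depth red square and a closer blue pixel
rasterize([
    ([(0, 0), (1, 0), (0, 1), (1, 1)], lambda i, j: 3.0, "red"),
    ([(1, 1)],                         lambda i, j: 1.0, "blue"),
])
print(frame)   # pixel (1, 1) ends up blue because 1.0 < 3.0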
Let’s consider an example to understand the algorithm in a better way. Assume the polygon given is as below :
In starting, assume that the depth of each pixel is infinite.
As the z value (the depth value) at every position in the given polygon is 3, applying the algorithm gives the result shown.
Now, let's change the z values. In the figure given below, the z values go from 0 to 3.
At the start, the depth of each pixel is again infinite.
The z values generated for the pixels are now different, as shown below.
Therefore, in the Z-buffer method, each surface is processed separately, one position at a time across the surface. After that, the depth values (the z values) for a pixel are compared, and the closest (smallest z) surface determines the color to be displayed in the frame buffer. The z values, i.e. the depth values, are usually normalized to the range [0, 1]. With this convention, z = 0 corresponds to the front (near) clipping plane and z = 1 to the back (far) clipping plane.
In this method, 2 buffers are used :
  • Frame buffer
  • Depth buffer
Calculation of depth :
As we know that the equation of the plane is :

ax + by + cz + d = 0, this implies 

z = -(ax + by + d)/c, c!=0
Calculation of each depth could be very expensive, but the computation can be reduced to a single add per pixel by using an increment method as shown in figure below :
Let’s denote the depth at point A as Z and at point B as Z’. Therefore :
AX + BY + CZ + D = 0 implies

Z = (-AX - BY - D)/C  ------------(1)

Similarly, Z' = (-A(X + 1) - BY -D)/C   ----------(2)

Hence from (1) and (2), we conclude :

Z' = Z - A/C  ------------(3)
Hence, calculation of depth can be done by recording the plane equation of each polygon in the (normalized) viewing coordinate system and then using the incremental method to find the depth Z.
So, to summarize, it can be said that this approach compares surface depths at each pixel position on the projection plane. Object depth is usually measured from the view plane along the z-axis of a viewing system.
Example :
Let S1, S2, S3 are the surfaces. The surface closest to projection plane is called visible surface. The computer would start (arbitrarily) with surface 1 and put it’s value into the buffer. It would do the same for the next surface. It would then check each overlapping pixel and check to see which one is closer to the viewer and then display the appropriate color. As at view-plane position (x, y), surface S1 has the smallest depth from the view plane, so it is visible at that position.
Points to remember :

1) Z buffer method does not require pre-sorting of polygons.
2) This method can be executed quickly even with many polygons.
3) This can be implemented in hardware to overcome the speed problem.
4) No object to object comparison is required.
5) This method can be applied to non-polygonal objects.
6) Hardware implementation of this algorithm are available in some graphics workstations.
7) The method is simple to use and does not require additional data structure.
8) The z-value of a polygon can be calculated incrementally.
9) It cannot be applied to transparent surfaces, i.e. it only deals with opaque surfaces.

10) If only a few objects in the scene are to be rendered, then this method is less attractive because of additional buffer and the overhead involved in updating the buffer.
11) Time may be wasted because hidden portions of objects are still drawn.

8. Define Bezier curves.
A Bezier curve is a curve in a drawing program that is defined mathematically and whose shape can be altered by dragging either of its two interior determining points with a mouse.
In mathematics, it is a simple smooth curve whose shape is determined by a formula from the locations of four points: the two end points of the curve and two interior control points.
Bezier curve

A type of curve defined by mathematical formulae, used in Computer Graphics. A curve with coordinates P(u), where u varies from 0 at one end of the curve to 1 at the other, is defined by a set of n+1 "control points" (X(i), Y(i), Z(i)) for i = 0 to n.

P(u) = Sum i=0..n [(X(i), Y(i), Z(i)) * B(i, n, u)]

B(i, n, u) = C(n, i) * u^i * (1-u)^(n-i)

C(n, i) = n!/i!/(n-i)!

A Bezier curve (or surface) is defined by its control points, which makes it invariant under any affine mapping (translation, rotation, parallel projection), and thus even under a change in the axis system. You need only to transform the control points and then compute the new curve. The control polygon defined by the points is itself affine invariant.

Bezier curves also have the variation-diminishing property. This makes them easier to split compared to other types of curve, such as Hermite curves or B-splines.

Other important properties are multiple values, global and local control, versatility, and order of continuity.
[Figure: a cubic Bézier curve with four control points.]
[Figure: the basis functions on the range t in [0, 1] for cubic Bézier curves: blue: y = (1 − t)³, green: y = 3(1 − t)²t, red: y = 3(1 − t)t², and cyan: y = t³.]
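A short Python sketch that evaluates the formula above directly from the basis functions B(i, n, u) = C(n, i) u^i (1−u)^(n−i) (the function name and the sample control points are illustrative):

from math import comb

def bezier_point(control_points, u):
    # Evaluate P(u) = sum_i P_i * B(i, n, u) for u in [0, 1].
    # control_points: list of (x, y, z) tuples, indexed 0..n.
    n = len(control_points) - 1
    point = [0.0, 0.0, 0.0]
    for i, p in enumerate(control_points):
        b = comb(n, i) * (u ** i) * ((1 - u) ** (n - i))   # basis function B(i, n, u)
        for axis in range(3):
            point[axis] += p[axis] * b
    return tuple(point)

# a cubic Bezier curve: two end points and two interior control points
ctrl = [(0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0)]
for u in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(u, bezier_point(ctrl, u))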
9. Explain analog and digital sound.
The debate over whether analog or digital sound is preferable is very delicate, about as resolvable as arguing about which is the better sport. There are people who swear by either side. On one side there are the high-end audio enthusiasts who believe analog sound is superior because it captures the true essence of the sound wave, while on the other side, there are those who are convinced that advances in audio recording will produce digital sound equally as pure. The final sound quality ultimately depends on the quality of both the recording and the sound system.

Analog sound
Sound itself is a continuous wave; it is an analog signal. This means that one cannot detect the precise moment the pitch changes. Capturing this continuous wave in its entirety requires an analog recording system; what the microphone receives is exactly what's written onto the vinyl disk or cassette. Analog is believed to be the true representation of the sound at the moment it was recorded.

Digital sound
Digital sound is not a recording of the actual sound, but rather binary code, the simplest machine language of zeros and ones, representing the sound's intensity and pitch at precise intervals with relative accuracy. The binary code is arranged in a specific pattern informing the computer how to recreate the sound itself. It is not a single wave the way analog sound is, but rather a composite of multiple segments representing consecutive moments of intensity and pitch. Where an analog recording is like the fluency of film, a digital recording is like stop-motion photography.
Digital sound is missing bits of the sound wave, but as digital recording improves, the reconstructed curve smooths out and begins to resemble the analog sound wave.

The recording method plays a big role
Sound waves can be stored on an analog master record and then transformed into a digital audio format using an analog-to-digital converter (ADC), or they can be recorded directly on a digital medium. When the audio is played, the digital signal is converted back into an analog sound wave using a digital-to-analog converter (DAC). Because the signal has undergone compression and then expansion, the integrity of the original sound is compromised.

Assuming you've invested in a high-quality sound system that's compatible with both the digital and analog formats, the superior choice depends upon which one was used to make the initial recording. The goal is to select a format that undergoes the fewest conversions: if an album was recorded as analog, the sound would not need to be converted to digital and back to analog, since it can be played directly on the analog system.
10. What are authoring tools? Give the characteristics of the same.
Authoring tools are software that allows (usually non-programmer) users to create their own courseware, web pages, or multimedia applications and the associated navigation tools.
Web authoring is the practice of creating web documents using modern web authoring software and tools. Web authoring software is a type of desktop publishing tool that allows users to navigate the tricky environment of HTML and web coding by offering a different kind of graphical user interface.
Example:  Web Authoring
With web authoring tools, the end user can see a visual result that is a lot like the final project after it is built. Web authoring tools are similar to HTML editors in that they typically allow toggling between an HTML code view and a visual design. Some of these types of tools are also called WYSIWYG ("what you see is what you get") editors because, again, they allow displaying something that looks like the final project as the user is building it. The alternative is to hand code a project, which can be frustrating, confusing and daunting for less experienced designers. There are many different tools available for web authoring that help translate HTML coding for those who do not have as much experience with web code syntax.
The main features of authoring software are:

• Integrated Multimedia Elements

• Script language programs

• Icon based programs

• DLLs for extending features

• Supporting CD-ROM or laser disc sources

• Supporting Video for Windows

• Hypertext

• Cross-Platform Capability

• Runtime Player for Distribution
11. Difference between the following:
(i) Frame Animation and Sprite Animation.
Animation means giving life to any object in computer graphics. It has the power of injecting energy and emotions into the most seemingly inanimate objects. Computer-assisted animation and computer-generated animation are two categories of computer animation. It can be presented via film or video.
The basic idea behind animation is to play back the recorded images at the rates fast enough to fool the human eye into interpreting them as continuous motion. Animation can make a series of dead images come alive. Animation can be used in many areas like entertainment, computer aided-design, scientific visualization, training, education, e-commerce, and computer art.
Frame Animation:
A keyframe is a frame where we define changes in the animation. When we create frame-by-frame animation, every frame is a keyframe. When someone creates a 3D animation on a computer, they usually don't specify the exact position of any given object on every single frame; they create keyframes.

Keyframes are important frames during which an object changes its size, direction, shape or other properties. The computer then figures out all the in-between frames and saves an extreme amount of time for the animator. The following illustrations depict the frames drawn by user and the frames generated by computer.
Sprite Animation: 
Sprite is a computer graphics term for a two-dimensional bitmap that is integrated into a larger scene, most often in a 2D video game
Originally, sprites referred to independent objects that are composited together, by hardware, with other elements such as a background. The composition occurs as each scan line is prepared for the video output device, such as a CRT, without involvement of the main CPU and without the need for a full-screen frame buffer. Sprites can be positioned or altered by setting attributes used during the hardware composition process. Examples of systems with hardware sprites include the Texas Instruments TI-99/4A and the Nintendo Entertainment System. Such hardware limits how many sprites can be displayed in total and how many can be displayed on a single scan line (which is often a lower number), as well as the dimensions and colors of each sprite and special effects such as scaling or reporting pixel-precise overlap.
Use of the term sprite has expanded to refer to any two-dimensional bitmap used as part of a graphics display, even if drawn into a frame buffer (by either software or a GPU) instead of being composited on-the-fly at display time.
The act of manually creating sprites, as opposed to pre-rendering them or using digitized images, is a form of pixel art. It is sometimes referred to as spriting, especially in the hobbyist community.

(ii) Scripting Systems and Parameterized Systems.
Scripting Systems:
Common scripting systems include shell scripting languages such as bash or awk; traditional interpreted, general-purpose languages like Python or Lua (or, in the olden days, Perl or Tcl); and many custom, special-purpose languages for expressing geometric models and their animation, such as RSL (the RenderMan Shading Language), which was used to express lighting and surface calculations, or "comet" and "ice", both of which were languages for compositing images.
You can even use scripting languages to generate procedural models or geometry.   I've used Python to convert star catalogs from astronomy websites into simple models that we can render as background plates.   Historically Pixar used a language of their own called "mdl" (pronounced "muddle") which was an APL-like language that could describe geometry and animation.   This was the underlying technology of our tools: many interactive tools would generate mdl, which was then evaluated and displayed or rendered. 
There is also a significant component of film making which is similar to many other kinds of systems administration.   We have databases full of information about each shot asset in the show.   We have disks full of elements, models, and textures which need to be managed.   We have many thousands of jobs that execute and need to be tracked each day.   While we have some engineered applications to help with each of these tasks, often having a way for scripts to access these assets and organize them is useful.

Scripting languages are an important tool to help you get your job done efficiently.  Most TDs have at least some ability in a few common ones, and those that are really knowledgeable and productive can reap big rewards.

Parameterized Systems:
Parameterized systems are systems which permit object-motion characteristics to be specified as part of the object descriptions. The adjustable parameters control such object characteristics as degrees of freedom, motion limitations, and allowable shape changes.
For example, in a scene we can separate the frames into individual components or objects called cels.
(iv) Computer generated and Computer Assisted Animation
 Computer generated
Computer animation is the process used for digitally generating animated images. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes the target of the animation is the computer itself; sometimes the target is another medium, such as film.
Computer animation is essentially a digital successor to stop-motion techniques, but using 3D models, and to traditional animation techniques using frame-by-frame animation of 2D illustrations. Computer-generated animations are more controllable than other, more physically based processes, like constructing miniatures for effects shots or hiring extras for crowd scenes, because they allow the creation of images that would not be feasible using any other technology. It can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it but advanced slightly in time (usually at a rate of 24, 25, or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.

Computer Assisted Animation:
Computer-assisted animation is animation that could not be completed without using a computer. Functions like in-betweening and motion capture are examples of computer-assisted animation.
Computer-assisted animation is common in modern animated films. Recent films such as “Beowulf” were created using computer-assisted animation. These techniques enhance modern animated films in ways not seen in film history.
12. Define acceleration in animation.
Acceleration is the rate of change of velocity as a function of time. It is a vector, meaning that it has both magnitude and direction. It is measured in meters per second squared, i.e. meters per second (the unit of speed or velocity) per second.
In calculus terms, acceleration is the second derivative of position with respect to time or, equivalently, the first derivative of velocity with respect to time.
Acceleration—Change in Speed
The everyday experience of acceleration is in a vehicle. You step on the accelerator, and the car speeds up as increasing force is applied to the drive train by the engine. But deceleration is also acceleration - the velocity is changing. If you take your foot off the accelerator, the force decreases and velocity is reduced over time. Acceleration, as heard in ads, follows the rule of the change of speed (miles per hour) over time, such as from zero to 60 miles per hour in seven seconds.
13. Briefly describe the illumination model in computer graphics. How do ambient, diffuse, and specular reflections contribute to the overall intensity of light? Give mathematical expressions for the same.

Light Sources

  • Point
  • Infinitely distant
  • Radial intensity attenuation – (1/d²)
      Problem: a point source with 1/d² attenuation does not always produce realistic results; it produces too much intensity variation for near objects and not enough for distant ones.
  • Directional light sources and spotlight effects
  • Angular intensity attenuation

Surface Lighting Effects

  • Diffuse reflection
  • Specular reflection
  • Ambient


Simple Illumination Model

Surfaces in real-world environments receive light in three ways:
1. Directly from existing light sources, such as the sun or a lit candle.
2. From light that passes through and refracts in transparent objects, such as water or a glass vase.
3. From light reflected, bounced, or diffused from other existing surfaces in the environment.

Local Illumination

  • Material models

  • Diffuse illumination

Lambert's cosine law of reflection (as shown in the diagram) uses:
1. n, a unit normal vector to the surface to be illuminated;
2. L, a unit vector from the surface position that points towards the light source;
3. Il, the intensity of a point light source;
4. kd, a diffuse reflection constant.

The equation gives the brightness of a surface point in terms of the brightness of the light source and its orientation relative to the surface normal vector n (the standard Lambertian term):

I = kd Il (n · L) = kd Il cos θ

  • I is the reflected intensity; it measures how bright the surface is at that point.
  • Surface brightness varies as a function of the angle θ between n and L. When n and L coincide, the light source is directly overhead, I is at a maximum, and cos θ = 1. As the angle increases to 90°, the cosine decreases the intensity to 0.
  • All the quantities in the equation are normalized between 0 and 1.
  • I is converted into frame-buffer intensity values by multiplying by the number of shades available. With 2^8 = 256 possible shades, I = 1 gives 255, the brightest frame-buffer intensity.
  • For n and L at an angle of 45°, I = cos 45° × 256 ≈ 181.
  • An image rendered with a Lambertian shader exhibits a dull, matte finish; it appears as if it has been viewed by a coal miner with a lantern attached to his helmet.
  • In reality, an object is not only subject to direct illumination from the primary light source but also receives secondary scattered light from all remaining surfaces.

  • Ambient illumination

  • The simple illumination model is unable to directly account for all scattered light, so it is grouped together as an independent intensity, Ia.
  • The formula becomes (the ambient term added to the Lambertian term)

I = Ia ka + kd Il (n · L)

  • Ia ka is the ambient illumination term, taking into account the additional environmental illumination, Ia, and the ability of the object to absorb it, ka.
  • [Figure: ambient illumination only.]
  • [Figure: diffuse + ambient illumination.]
  • Illumination from a point light source decreases with distance as 1/d².
  • Objects at a greater distance from the light source receive less illumination and appear darker.
  • The above equation is distance independent.
  • Dividing the Lambertian term by d² would seem to get the physics right, but it makes the intensity vary too sharply over short distances.
  • A modified distance dependence is therefore employed; a commonly used form divides the Lambertian term by (d + d0):

I = Ia ka + kd Il (n · L) / (d + d0)

  • d is the distance from the light source to the object (and d0 is a constant that keeps the variation gradual).

  • Specular highlights – (Phong reflection model)

  • Regions of significant brightness, exhibited as spots or bands, characterize objects that specularly reflect light.
  • Specular highlights originate from smooth, sometimes mirror-like surfaces.
  • The ideal reflection direction follows the law of reflection: for a perfectly reflecting surface, the angle of incidence equals the angle of reflection.

  • Most objects are not perfect mirrors:
    • there is some angular scattering of light;
    • if the viewer increases the angle α between the line-of-sight vector S and the reflectance vector R, the bright spot gradually disappears;
    • smooth surfaces scatter light less than rough surfaces, which produces more localized highlights.

  • Building this effect into the lighting model gives (the Phong form of the equation)

I = Ia ka + [kd Il (n · L) + ks Il (cos α)^n] / (d + d0)

  • The specular reflectance term possesses a specular reflectance constant, ks.
  • The cosine term is raised to the nth power:
    • small values of n (e.g. 5) spread out the specular highlights, characteristic of glossy paper;
    • high values of n (e.g. 50) are characteristic of metals.


Simple Illumination Model: summary
  • Deficiencies
    • point light source only
    • no interaction between objects
    • ad hoc, not based on a physical model of light propagation
  • Benefits
    • fast
    • acceptable results
    • hardware support
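To make the three contributions concrete, here is a small Python sketch of the combined ambient + diffuse + specular computation described above (the constant values and vector set-up are illustrative assumptions):

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def illumination(n, l, v, ka=0.1, kd=0.6, ks=0.3, ia=1.0, il=1.0, shininess=20):
    # Combined ambient + diffuse + specular intensity at one surface point.
    # n: surface normal, l: direction to the light, v: direction to the viewer.
    n, l, v = normalize(n), normalize(l), normalize(v)
    ambient = ka * ia                                            # scattered environment light
    diffuse = kd * il * max(dot(n, l), 0.0)                      # Lambertian term
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))   # reflection of l about n
    specular = ks * il * (max(dot(r, v), 0.0) ** shininess)      # highlight term
    return ambient + diffuse + specular

# light directly overhead, viewer slightly off to the side
print(illumination(n=(0, 0, 1), l=(0, 0, 1), v=(0.2, 0, 1)))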

13. Discuss the term “Sweep Representation” with examples.
Sweep representations are used to construct 3D objects from 2D shapes that have some kind of symmetry.
For example, a prism can be generated using a translational sweep, and rotational sweeps can be used to create curved surfaces like an ellipsoid or a torus.
In both cases, you start with a cross-section and generate more vertices by applying the appropriate transformation.
More complex objects can be formed by using more complex transformations. Also, we can define a path for the sweep, which allows us to create more interesting shapes.
Translational sweep:
i. Define a shape as a polygon vertex table as shown in figure 33 (a).
ii. Define a sweep path as a sequence of translation vectors figure 33 (b).
iii. Translate the shape; continue building a vertex table figure 33 (c).
iv. Define a surface table figure 33 (d).
[Figure 33: translational sweep]
Rotational sweep:
i. Define a shape as a polygon vertex table as shown in figure 34 (a).
ii. Define a sweep path as a sequence of rotations.
iii. Rotate the shape; continue building a vertex table as shown in figure 34 (b).
iv. Define a surface table as shown in figure 34 (c).
[Figure 34: rotational sweep]

14. Explain the scan-line method of visible surface detection in computer graphics.

Scan-Line Method

It is an image-space method to identify visible surfaces. This method keeps depth information for only a single scan line. To obtain one scan line of depth values, we must group and process all polygons intersecting a given scan line together before processing the next scan line. Two important tables, the edge table and the polygon table, are maintained for this.
The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces.
The Polygon Table − It contains the plane coefficients, surface material properties, other surface data, and may be pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed. The active list stores only those edges that cross the scan-line in order of increasing x. Also a flag is set for each surface to indicate whether a position along a scan-line is either inside or outside the surface.
Pixel positions across each scan-line are processed from left to right. At the left intersection with a surface, the surface flag is turned on and at the right, the flag is turned off. You only need to perform depth calculations when multiple surfaces have their flags turned on at a certain scan-line position.
15. The transformation Av which aligns a given vector V with the vector K along the positive z-axis.
Let the columns of Av be a1, …, an. Then Av·v can be written as v1 a1 + … + vn an. To make this equal to an arbitrary vector k, we just need to equate the above expression to k:
v1 a1 + … + vn an = k
This is an underdetermined system, so it has an infinite number of solutions. In particular, if Av is an m×n matrix and k is an n-dimensional vector, then there are mn unknowns and n equations, so there are (m−1)n degrees of freedom.
The given vector is V = ai + bj + ck, with a = 0, b = 1, c = 1. Let |V| = sqrt(a² + b² + c²) and λ = sqrt(b² + c²).

Step 1: Rotate the vector V about the x-axis by an angle θ = +θ1 (positive indicates counter-clockwise), so that V moves into the xz-plane.
The projection of V on the yz-plane is V1 = bj + ck, whose length is λ, so

sin θ1 = b/λ,   cos θ1 = c/λ

Now, the rotation matrix about the x-axis is

Rx(θ1) = | 1      0          0     |
         | 0   cos θ1    −sin θ1   |
         | 0   sin θ1     cos θ1   |

On substituting the values of sin θ1 and cos θ1, we get

Rx = | 1    0      0   |
     | 0   c/λ   −b/λ  |
     | 0   b/λ    c/λ  |

Applying Rx to V gives the vector (a, 0, λ).

Step 2: Rotate the vector (a, 0, λ) about the y-axis by an angle θ = −θ2, so as to coincide with the z-axis, where

sin θ2 = a/|V|,   cos θ2 = λ/|V|

Now, the rotation matrix about the y-axis by −θ2 is

Ry(−θ2) = |  cos θ2   0   −sin θ2 |
          |    0      1      0    |
          |  sin θ2   0    cos θ2 |

Now, the combined transformation (with column vectors, applying Rx first and then Ry) is [Av] = [Ry(−θ2)] [Rx(θ1)]:

Av = |  λ/|V|   −ab/(λ|V|)   −ac/(λ|V|) |
     |    0         c/λ          −b/λ   |
     |  a/|V|      b/|V|         c/|V|  |

For the given values a = 0, b = 1, c = 1, we have λ = |V| = √2, so

Av = | 1      0        0    |
     | 0    1/√2    −1/√2   |
     | 0    1/√2     1/√2   |

and Av·V = (0, 0, √2)ᵀ = |V| k. This is the matrix [Av] which aligns the given vector V = ai + bj + ck with the vector k along the positive z-axis.
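A quick numerical check of this matrix in Python (a sketch; the helper names are illustrative, and the construction assumes (b, c) is not the zero vector):

import math

def align_with_z(a, b, c):
    # Build the matrix Av that rotates the vector (a, b, c) onto the positive z-axis.
    lam = math.sqrt(b * b + c * c)           # length of the yz-projection
    mag = math.sqrt(a * a + b * b + c * c)   # |V|
    return [
        [lam / mag, -a * b / (lam * mag), -a * c / (lam * mag)],
        [0.0,        c / lam,             -b / lam            ],
        [a / mag,    b / mag,              c / mag            ],
    ]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

Av = align_with_z(0, 1, 1)
print(apply(Av, [0, 1, 1]))   # -> [0.0, 0.0, 1.414...], i.e. |V| along +z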

16. Magnify the triangle with vertices A(0,0), B(1,1) and C(5,2) to twice its size while keeping C(5,2) fixed.

The current triangle is ΔCAB; magnifying it to twice its size about the fixed vertex C gives ΔCDE, where D and E are the images of A and B.

ΔCAB ~ ΔCDE with

CA/CD = CB/CE = AB/DE = 1/2

CD = 2CA

=> A will be midpoint of CD

(0, 0) = ((Dₓ + 5)/2, (Dy + 2)/2)

=> Dₓ = -5   Dy = -2

D (-5 , -2)

Similarly CE = 2CB

=> B will be midpoint of CE

(1, 1) = ((Eₓ + 5)/2, (Ey + 2)/2)

Eₓ = -3   Ey = 0

E = (-3 ,0)

ΔCDE = C(5,2) , D (-5 , -2) , E  (-3 ,0)

Verification

CA = √29

CB = √17

AB = √2

CD = √116 = 2√29 = 2CA

CE = √68 = 2√17 = 2CB

DE = √8 = 2√2 = 2AB
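The same result follows from the standard scaling about a fixed point, P' = C + s·(P − C) with s = 2; a small Python sketch (the function name is illustrative):

def scale_about_point(p, c, s):
    # Scale point p by factor s while keeping point c fixed: p' = c + s * (p - c).
    return tuple(cc + s * (pc - cc) for pc, cc in zip(p, c))

C = (5, 2)
for name, p in (("A", (0, 0)), ("B", (1, 1)), ("C", C)):
    print(name, "->", scale_about_point(p, C, 2))
# A -> (-5, -2), B -> (-3, 0), C -> (5, 2), matching D, E and the fixed vertex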

17. Explain the Sutherland-Hodgman polygon clipping algorithm.
A polygon can be clipped by processing its boundary as a whole against each window edge. This is achieved by processing all polygon vertices against each clip rectangle boundary in turn. Beginning with the original set of polygon vertices, we first clip the polygon against the left rectangle boundary to produce a new sequence of vertices. The new set of vertices is then successively passed to a right boundary clipper, a top boundary clipper and a bottom boundary clipper, as shown in figure (l). At each step a new set of polygon vertices is generated and passed to the next window boundary clipper. This is the fundamental idea used in the Sutherland-Hodgman algorithm.
[Figure (l): the polygon is clipped against the left, right, top and bottom window boundaries in turn]
The output of the algorithm is a list of polygon vertices, all of which are on the visible side of the clipping plane. Each edge of the polygon is compared individually with the clipping plane by processing the two vertices of the edge against the clipping boundary or plane. This results in four possible relationships between the edge and the clipping boundary or plane (see Fig. m).
  1. If the first vertex of the edge is outside the window boundary and the second vertex of the edge is inside then the intersection point of the polygon edge with the window boundary and the second vertex are added to the output vertex list (See Fig. m (a)).
  2. If both vertices of the edge are inside the window boundary, only the second vertex is added to the output vertex list. (See Fig. m (b)).
  3. If the first vertex of the edge is inside the window boundary and the second vertex of the edge is outside, only the edge intersection with the window boundary is added to the output vertex list. (See Fig. m (c)).
  4. If both vertices of the edge are outside the window boundary, nothing is added to the output list. (See Fig. m (d)).
Once all vertices are processed for one clip window boundary, the output list of vertices is clipped against the next window boundary. Going through above four cases we can realize that there are two key processes in this algorithm.
  1. Determining the visibility of a point or vertex (the inside-outside test), and
  2. Determining the intersection of the polygon edge and the clipping plane.
One way of determining the visibility of a point or vertex is described here. Consider that two points A and B define the window boundary and the point under consideration is V; then these three points define a plane. Two vectors which lie in that plane are AB and AV. If this plane is taken to be the xy-plane, then the vector cross product AV × AB has only a z component, given by
z = (xV − xA)(yB − yA) − (yV − yA)(xB − xA)
The sign of the z component decides the position of Point V with respect to window boundary.
If z is:
Positive - Point is on the right side of the window boundary.
Zero - Point is on the window boundary.
Negative - Point is on the left side of the window boundary.
Sutherland-Hodgman Polygon Clipping Algorithm:
  1. Read the coordinates of all vertices of the polygon.
  2. Read the coordinates of the clipping window.
  3. Consider the left edge of the window.
  4. Compare the vertices of each edge of the polygon, individually, with the clipping plane.
  5. Save the resulting intersections and vertices in the new list of vertices, according to the four possible relationships between the edge and the clipping boundary.
  6. Repeat steps 4 and 5 for the remaining edges of the clipping window. Each time, the resultant list of vertices is successively passed on to process the next edge of the clipping window.
  7. Stop.
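A compact Python sketch of this procedure for an axis-aligned clipping window is given below (the function and variable names are illustrative assumptions, not from the original notes):

def clip_polygon(polygon, window):
    # Clip a polygon (list of (x, y) vertices) against the rectangular window
    # (xmin, ymin, xmax, ymax), processing one boundary at a time as in steps 3-6.
    xmin, ymin, xmax, ymax = window
    boundaries = [                                   # "inside" test per boundary
        lambda p: p[0] >= xmin, lambda p: p[0] <= xmax,
        lambda p: p[1] >= ymin, lambda p: p[1] <= ymax,
    ]
    clip_lines = [                                   # a point and direction on each boundary
        ((xmin, 0), (0, 1)), ((xmax, 0), (0, 1)),
        ((0, ymin), (1, 0)), ((0, ymax), (1, 0)),
    ]

    def intersect(p1, p2, line):
        (px, py), (dx, dy) = line
        x1, y1 = p1
        x2, y2 = p2
        t = ((px - x1) * dy - (py - y1) * dx) / ((x2 - x1) * dy - (y2 - y1) * dx)
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

    output = list(polygon)
    for inside, line in zip(boundaries, clip_lines):
        if not output:
            break
        input_list, output = output, []
        for i, current in enumerate(input_list):
            previous = input_list[i - 1]
            if inside(current):
                if not inside(previous):                      # case 1: entering
                    output.append(intersect(previous, current, line))
                output.append(current)                        # case 2: both inside
            elif inside(previous):                            # case 3: leaving
                output.append(intersect(previous, current, line))
            # case 4: both outside -> add nothing
    return output

print(clip_polygon([(-1, 1), (3, 5), (5, -1)], (0, 0, 4, 4)))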
Example: For the polygon and clipping window shown in the figure below, give the list of vertices after each boundary clipping.
[Figure: polygon V1–V5 and the clipping window]
Solution: The original polygon vertices are V1, V2, V3, V4, and V5. After clipping against each boundary, the new vertices are as shown in the figure above.
18. Write a short note on the basic ray tracing algorithm.
The simplest use of ray tracing is to produce images similar to those produced by the z-buffer and BSP-tree algorithms. Fundamentally, those methods make sure the appropriate object is “seen” through each pixel, and that the pixel color is shaded based on that object’s material properties, the surface normal seen through that pixel, and the light geometry.

Figure 10.1 shows the basic viewing geometry for ray tracing: the borders of the window have simple coordinates in the uvw-coordinate system with respect to the origin e.
The geometry is aligned to a uvw coordinate system with the origin at the eye location e. The key idea in ray tracing is to identify locations on the view plane at w = n that correspond to pixel centers, as shown in Figure 10.2: the sample points on the screen are mapped to a similar array on the 3D window, and a viewing ray is sent to each of these locations.
A “ray,” really just a directed 3D line, is then sent from e to that point. We then “gaze” in the direction of the ray to see the first object seen in that direction. This is shown in Figure 10.3, where the ray intersects two triangles, but only the first triangle hit, T2, is returned.
The structure of the basic ray tracing program is:

compute u, v, w basis vectors
for each pixel do
    compute viewing ray
    find first object hit by ray and its surface normal n
    set pixel color to value based on material, light, and n

The pixel color can be computed using the shading equations of the previous chapter.
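A skeletal Python version of that loop, with a ray-sphere intersection standing in for the generic "find first object hit by ray" step (the scene and all names are illustrative assumptions, not from the excerpt above):

import math

def hit_sphere(origin, direction, center, radius):
    # Return the smallest positive t where origin + t*direction hits the sphere, or None.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

eye = (0.0, 0.0, 0.0)
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
width = height = 8

for j in range(height):                                  # for each pixel do
    row = ""
    for i in range(width):
        u = (i + 0.5) / width * 2 - 1                    # pixel centre on the view plane
        v = (j + 0.5) / height * 2 - 1
        ray_dir = (u, v, -1.0)                           # compute viewing ray
        # find first object hit by the ray; shade it if hit, else background
        row += "#" if hit_sphere(eye, ray_dir, sphere_center, sphere_radius) else "."
    print(row)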
19. Difference between diffuse and specular reflections.
The amount of light reflected by an object, and how it is reflected, depends strongly upon the smoothness or texture of the surface. When surface imperfections are smaller than the wavelength of the incident light (as in the case of a mirror), virtually all of the light is reflected equally. However, in the real world most objects have convoluted surfaces that exhibit diffuse reflection, with the incident light being reflected in all directions.
The rougher the surface, the greater the number of angles and wavelengths at which light is reflected; a perfectly flat surface, by contrast, exhibits specular reflection of all incident wavelengths that match the color of the surface.
Most things that we see (people, cars, houses, animals, trees, etc.) do not themselves emit visible light but reflect incident natural sunlight and artificial light. For instance, an apple appears a shiny red color because it has a relatively smooth surface that reflects red light and absorbs other non-red (such as green, blue, and yellow) wavelengths of light. The reflection of light can be roughly categorized into two types of reflection: specular reflection is defined as light reflected from a smooth surface at a definite angle, and diffuse reflection, which is produced by rough surfaces that tend to reflect light in all directions (as illustrated in Figure 1). There are far more occurrences of diffuse reflection than specular reflection in our everyday environment.
To visualize the differences between specular and diffuse reflection, consider two very different surfaces: a smooth mirror and a rough reddish surface. The mirror reflects all of the components of white light (such as red, green, and blue wavelengths) almost equally and the reflected specular light follows the same angle from the normal, as does the incident light. The rough reddish surface, however, does not reflect all wavelengths because it absorbs most of the blue and green components, and reflects the red light. Also, the diffuse light that is reflected from the rough surface is scattered in all directions.
Perhaps the best example of specular reflection, which we encounter on a daily basis, is the mirror image produced by a household mirror that people might use many times a day to view their appearance. The mirror's smooth reflective glass surface renders a virtual image of the observer from the light that is reflected directly back into the eyes. This image is referred to as "virtual" because it does not actually exist (does not produce light) and appears to be behind the plane of the mirror due to an assumption that the brain naturally makes. The way in which this occurs is easiest to visualize when looking at the reflection of an object to one side of the observer, so that the light from the object strikes the mirror at an angle and is reflected at an equal angle to the viewer's eyes. As the eyes receive the reflected rays, the brain assumes that the light rays have reached the eyes in a direct straight path. Tracing the rays backward toward the mirror, the brain perceives an image that is positioned behind the mirror. An interesting feature of this reflection artifact is that the image of an object being observed appears to be the same distance behind the plane of the mirror as the actual object is in front of the mirror.
20. Difference between orthographic and oblique projections.
Both are parallel projections; the angles between the projected axes differ based on the type of projection (e.g., an isometric projection has equal angles between all the axes).
They are similar in that they are both parallel projections (lines that are parallel in the source are parallel in the projection). In a parallel projection onto the xy-plane, the point (x, y, z) becomes (x + az, y + bz, 0). When a and b are both zero, the projection is orthographic; otherwise the projection is oblique.
Another way to look at it is that in an orthographic projection the projector lines intersect the plane being projected onto at a perpendicular angle (thus they are orthogonal, hence the name of the projection), whereas in an oblique projection those lines form oblique (non-right) angles with the projection plane.

21. Find the form of the matrix for reflection about a line L with slope m and y-intercept (0, b).
1. First, a translation by −b is applied so that the line passes through the origin.

2. Then we need to make the line coincide with the x-axis. For this we find the angle of rotation θ from tan θ = m; a rotation by −θ is then applied so that the line coincides with the x-axis.

3. Now reflection about the x-axis is performed.

4. After performing the reflection we perform the reverse rotation, i.e. a rotation by the angle θ.

5. Finally, perform the reverse translation by b to get the total transformation matrix for reflection about the line L having slope m and y-intercept (0, b).

Thus the total transformation (with points as row vectors, applied left to right) is T(−b) · R(−θ) · Ref(x-axis) · R(θ) · T(b).

Given the values of m and b, calculate the angle θ and substitute it into the above transformation matrices to obtain the final matrix for reflection about the line L.
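A small Python sketch that builds the composite matrix in homogeneous coordinates (column-vector convention, so the first operation applied is the rightmost factor; the names are illustrative):

import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def reflection_about_line(m, b):
    # Homogeneous 3x3 matrix for reflection about the line y = m*x + b,
    # composed as T(b) . R(theta) . Ref(x-axis) . R(-theta) . T(-b).
    theta = math.atan(m)
    c, s = math.cos(theta), math.sin(theta)
    T_down = [[1, 0, 0], [0, 1, -b], [0, 0, 1]]   # translate by -b
    R_neg  = [[c, s, 0], [-s, c, 0], [0, 0, 1]]   # rotate by -theta
    Ref_x  = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]   # reflect about the x-axis
    R_pos  = [[c, -s, 0], [s, c, 0], [0, 0, 1]]   # rotate back by theta
    T_up   = [[1, 0, 0], [0, 1, b], [0, 0, 1]]    # translate back by b
    M = T_up
    for step in (R_pos, Ref_x, R_neg, T_down):
        M = mat_mul(M, step)
    return M

# reflect the point (0, 2) about the line y = x (m = 1, b = 0): expect (2, 0)
M = reflection_about_line(1, 0)
x, y = 0, 2
print(M[0][0] * x + M[0][1] * y + M[0][2], M[1][0] * x + M[1][1] * y + M[1][2])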
22. Explain the Cyrus-Beck line clipping algorithm.
The Cyrus–Beck algorithm is a generalized line clipping algorithm. It was designed to be more efficient than the Cohen–Sutherland algorithm, which uses repetitive clipping. Cyrus–Beck is a general algorithm and can be used with any convex polygon clipping window, unlike Cohen–Sutherland, which can be used only on a rectangular clipping area.
Here the parametric equation of the line segment from p0 to p1 in the view plane is

p(t) = p0 + t (p1 − p0),  0 ≤ t ≤ 1

Now, to find the intersection point with the clipping window, we calculate the value of a dot product. Let pE be a point on the clipping edge (plane) E.


Cyrus–Beck algorithm
Calculate n · (p(t) − pE):
if < 0, the vector points towards the interior;
if = 0, the vector points parallel to the plane containing pE;
if > 0, the vector points away from the interior.
Here n stands for the normal of the current clipping plane (pointing away from the interior).
By this we select the points of intersection of the line with the clipping window, where the dot product is 0, and hence clip the line.
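A small Python sketch of Cyrus-Beck clipping against a convex polygon (the counter-clockwise vertex ordering and all names are illustrative assumptions, not from the original notes):

def cyrus_beck(p0, p1, clip_poly):
    # Clip the segment p0->p1 against a convex polygon whose vertices are given
    # in counter-clockwise order. Returns the clipped endpoints, or None if the
    # segment lies completely outside.
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t_enter, t_exit = 0.0, 1.0
    count = len(clip_poly)
    for i in range(count):
        a, b = clip_poly[i], clip_poly[(i + 1) % count]
        nx, ny = b[1] - a[1], -(b[0] - a[0])             # outward edge normal n
        denom = nx * dx + ny * dy                        # n . (p1 - p0)
        num = nx * (p0[0] - a[0]) + ny * (p0[1] - a[1])  # n . (p0 - pE)
        if denom == 0:                                   # segment parallel to this edge
            if num > 0:
                return None                              # parallel and outside
            continue
        t = -num / denom                                 # where n . (p(t) - pE) = 0
        if denom < 0:
            t_enter = max(t_enter, t)                    # potentially entering
        else:
            t_exit = min(t_exit, t)                      # potentially leaving
        if t_enter > t_exit:
            return None
    point = lambda t: (p0[0] + t * dx, p0[1] + t * dy)
    return point(t_enter), point(t_exit)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]                # unit-square clipping window
print(cyrus_beck((-0.5, 0.5), (1.5, 0.5), square))       # -> ((0.0, 0.5), (1.0, 0.5))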


