Tuesday, June 25, 2013

Pizzas and Area estimation of images

Last time, we manipulated images by converting them to black and white and to grayscale. These methods are helpful in measuring the area covered by a region in an image. With a "binary" image, that is, black and white, we can represent the region that we want to observe so that it is distinct from the background.

Once we have selected the region of interest, we will slice it like a pizza!
Yummy! [1]

Any shape can be divided into several slices, such as in the following image:

Figure 1: Map of the United States of America. Or better yet, United Slices of America. [2]

By separating it into smaller triangles/slices of pizza, we can approximate the area of the entire region by simply adding the areas of all the triangles. But weird shapes that do not break down into simple triangular slices would be troublesome, like the one at the upper right part of Figure 1. If we were to measure such a region, we would have to segment it not just radially. Can't imagine how to solve it? Me neither. Well, not for now. Maybe next time. :P
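In equation form, each slice with one vertex at the centroid (taken as the origin) and the other two at consecutive boundary points $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$ has an area of half their cross product, so the total area is

\begin{equation}
A = \frac{1}{2} \left| \sum_{k} \left( x_k\, y_{k+1} - x_{k+1}\, y_k \right) \right|,
\end{equation}

where the sum wraps around from the last boundary point back to the first. This is exactly what the Scilab code near the end of this post adds up.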

So here, we will be limited to the application on simple shapes generated using Paint. Why Paint? For one, I don't have Photoshop right now. Another is that GIMP can't produce shapes. And another reason is that Paint provides the dimensions of the shape that I'm creating. Well, Paint gives the length and width of the rectangular region immediately around the shape, which is what I'm after. Using these measurements, I can compute the actual size of the shape. With Scilab, I computed the area of the shape in the image by following this flowchart:

Figure 2: Flowchart of measuring the area of a shape

I tested the code that I wrote in Scilab on a circle with a diameter of 300 pixels. I also tried placing the circle at different locations in the image to see if the position would matter (which it probably won't).
Figure 3. Circles with diameters of 300 pixels on a 500x500 pixel image. As will be used in the following table, the images are labeled according to their location: (a) center, (b) upper left, (c) lower left, and (d) bottom.

In Figure 3d, the circle is tangent to the border of the image, making the outline of the circle discontinuous in a sense.

Table 1. Actual and measured area of a circular region on an image

Yes, as expected, placing the circle away from the center does not change the area measured by the code. A small difference was observed for Figure 3d. When I was making that image in Paint, I simply dragged the circle towards the edge of the image; perhaps I did not notice that some parts of the circle went outside the image.

Or maybe it was due to the connection at the edge of the image. So I went on investigating about it by creating this image:

Figure 4. Semicircle with diameter of 300 pixels on a 500x500 pixel image

This image was also constructed using Paint. There was a 10.97% difference from the actual area of 35342 square pixels when the area was approximated using the Scilab code. Why is there a large discrepancy? I figured out that the edge() function was not able to detect the flat side of the semicircle positioned on the boundary of the image. To be sure, I plotted the points determined by the edge() function to be the edge of the white shape:

Figure 5. Location of the points determined by the edge() function to be the boundary of the semicircle that has a diameter of 300 pixels.

In Figure 5, the lines connect all the points on the edge. Seeing that there is a low density of lines at the upper portion, there must be only a small number of edge points on that part. Also, even after zooming in on the upper boundary, I could not find any distinct connections between points. That being said, the code probably divided the semicircle into thin slices of pizza at the bottom part, while at the upper portion it created only one large triangle with one vertex at the centroid of the semicircle and the other two vertices at the right and left corners of the flat side. Having large slices of pizza decreases the precision of the approximation.
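For reference, here is a minimal sketch of how a plot like Figure 5 can be produced (the filename is hypothetical; note that plot2d connects consecutive points with lines by default):

semi = gray_imread("C:\Users\Akared\Desktop\AP 186 Act 4\semicircle300.bmp"); // hypothetical filename
edge_semi = edge(semi);               // SIVP edge detector, returns a matrix of edge pixels
[row_val, col_val] = find(edge_semi); // pixel coordinates of the detected edge points
plot2d(col_val, -row_val);            // negate the row values so the plot matches the image orientation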


Application
Our house is located on a trapezoidal lot, which makes it difficult for me to describe the dimensions of our lot whenever my friends ask about it. With the help of Google maps, I was able to capture an image of our residential area:
Figure 6. Image of our residential area [3]

I saved a 941x552 pixel image of this using the built-in Snipping Tool of Windows 7. From Paint, I determined the ratio of meters to pixels by drawing a line over the scale bar. A ten-meter distance corresponds to 77 pixels on the image.
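Just to make the conversion explicit, a measured pixel area is scaled to square meters using

\begin{equation}
A_{\mathrm{m^2}} = A_{\mathrm{px}} \left( \frac{10\ \mathrm{m}}{77\ \mathrm{px}} \right)^2 .
\end{equation}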

Have you guessed where our house is? Of course not. XD So here it is, with the lot covered by a red trapezoid:
Figure 7. Our lot marked by the red trapezoid

Figure 8. Conversion of Figure 6 to black and white

Using the same code, the area was estimated to be 37776 square pixels, which corresponds to 637 square meters. Again, with Paint, I estimated the area of the trapezoid, assuming that the two bases (the lower left and upper right edges) are parallel to each other. With that method, I measured the area to be 36206 square pixels, resulting in a 4.33 percent difference from the previous value.

How should I measure the actual value?

Mothers know best.

So I asked my mom about the area of our lot, and she said that it's 678 square meters. I suppose it was based on the land title. That makes my approximation differ from the actual area by 6.02%. Not bad! I guess.

The discrepancy between the two values could have been smaller if our lot had a distinct fence. Heck, our lot is filled with trees at the edges, which made it difficult for me to estimate the actual boundary of our lot.

Or maybe it was due to the uncertainties in the scaling factor.

But with that percent difference, I think my code works just fine. So I'll give myself a grade of 10/10 for this activity. Yeah!



Before I forget, I would like to thank Dr Soriano for discussing the flow of the computation for the area measurement.

By the way, here is the code that I have written to perform the area approximation. Feel free to use it :D
circle = imread("C:\Users\Akared\Desktop\AP 186 Act 4\circle300_center.bmp"); // requires the SIVP toolbox
circle_gray = rgb2gray(circle);
edge_circle = edge(circle_gray);
[c_val, r_val] = find(edge_circle); // pixel coordinates of the detected edge points
sum_x = sum(c_val);
sum_y = sum(r_val);
Xc_temp = sum_x/size(c_val); // size() returns [1 N], so element 2 below is the mean
Yc_temp = sum_y/size(r_val);
Xc = int(Xc_temp(2)); //centroid
Yc = int(Yc_temp(2)); // centroid
X_new = c_val - abs(Xc); // shift the origin to the centroid
Y_new = r_val - abs(Yc);
R = sqrt(X_new.^2 + Y_new.^2);
theta = atan(Y_new, X_new); // polar angle of each edge point about the centroid
X_final = zeros(theta);
Y_final = zeros(theta);
max_theta = abs(max(theta));
add_this = max_theta*100; // offset large enough to push already-processed angles out of the search
for k=1:length(theta) // sort the edge points by increasing angle
    min_loc_temp = find(theta==min(theta));
    min_loc = min_loc_temp(1);
    theta(min_loc) = theta(min_loc) + add_this; //replace the current minimum to locate the next minimum on the next iteration
    X_final(k) = X_final(k) + X_new(min_loc);
    Y_final(k) = Y_final(k) + Y_new(min_loc);
end
total_area = 0.0;
for k=1:(length(theta)-1) // each term is twice the area of one "pizza slice" (cross product of consecutive edge points)
    total_area = total_area + (X_final(k)*Y_final(k+1) - X_final(k+1)*Y_final(k));
end
//we still have to add the area covered by the first and last pair of pixel coordinates
total_area = total_area + X_final(length(X_final))*Y_final(1) - X_final(1)*Y_final(length(Y_final));
total_area = total_area/2.0;
disp(total_area)

References
[1] http://randommealoftheday.blogspot.com/2011/07/pepperoni-pizza-at-mammas-brick-oven.html
[2] http://slice.seriouseats.com/archives/2010/10/where-is-the-best-pizza-in-miami-florida-fl.html
[3] https://maps.google.com/
[4] Soriano, M. A5 - Area estimation of images with defined edges 2013 activity manual.




Edit (June 26, 2013):
In case you're wondering how the function edge() works on Figure 3a, here it is:
Figure 9. Plotting the output of the edge() function after adjusting to the centroid.

Since it outputs a series of pixel coordinates from the top left to the bottom right of the image, the lines connecting consecutive values cross the inside of the shape. This forced me to sort the points so that their order follows the outline of the shape, which involves computing the angle of each point with respect to the +x axis (measured from the centroid).
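As a side note, the explicit minimum-search loop in the code above is just one way to do this sort. Scilab's built-in gsort could do the same thing in one call (a sketch, reusing the variable names from the code above):

[theta_sorted, order] = gsort(theta, "g", "i"); // sort the angles in increasing order and keep the permutation
X_final = X_new(order);                         // reorder the edge points to follow the outline
Y_final = Y_new(order);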

After sorting, I was able to create the following image:
Figure 10. Connecting the edges following the outline

So from there, I was able to create triangles/pizzas with the 2 vertices lying on the outline and the third on the centroid.

You might also notice that there is a discontinuity in the outline shown in Figure 10. If you inspect the Scilab code that I wrote, after summing over all the consecutive pairs of pixel coordinates in the second loop, I also added the triangular area formed by the first and last edge points (the line right after that loop) to complete the entire area.

Yeah!

Tuesday, June 18, 2013

Image type and file formats

In this activity, we work with different types of images and learn how to manipulate them using image processing software. First, we look at the properties of some images using Scilab and the function imfinfo() of the Scilab Image and Video Processing (SIVP) toolbox. Scilab's help() function describes the parameters and output of whatever function is passed to it; for imfinfo(), the output fields are:

FileName: String containing the image file name.
FileSize: Integer indicating the size of the image file in bytes.
Width: Integer indicating image width in pixels.
Height: Integer indicating image height in pixels.
BitDepth: Integer indicating the number of bits per pixel.
ColorType: String containing the color type of the image file, which can be 'grayscale' or 'truecolor'.

Along with the output of imfinfo(), I've also included the information from the file properties of each image (obtained by right-clicking on the image file).
Figure 1. Grayscale image [1]

Filename  C:\Users\Akared\Desktop\AP 186 Act 3\bleach2.bmp  
FileSize             60062  
Width                   800
Height                  600
BitDepth                  8 
ColorType   grayscale

Figure 2. Image properties of Figure 1.


Figure 3. Image of Saturn taken by the author on April 14, 2013.

The image was obtained using a Skywatcher 150mm reflecting telescope and a Samsung PL120 point-and-shoot camera set to video recording mode at 30 frames per second. The video file, 15 seconds long, was split to individual frames and processed using Registax 6.

Filename  C:\Users\Akared\Desktop\AP 186 Act 3\saturn2.tif  
FileSize              256348  
Width                      381  
Height                     168 
BitDepth                     8  
ColorType       truecolor

Figure 4. Image properties of Figure 3.

This was the video file used to obtain the image of Saturn in Figure 3:


The camera was not properly aligned with the axis of Saturn's rings, so the recorded video showed a tilted view of Saturn. In the video, the outline of Saturn was blurred due to atmospheric effects. By stacking the 450 frames from the 15-second video, the details on the planet became distinct. Since the effect of the atmosphere is different in each frame, averaging all those frames cancels it out. Also, you can now see the Cassini division in Saturn's rings, which is the gap between the inner and outer regions of the ring [3].

Figure 5. Taken using Canon EOS 500D.

Filename:  C:\Users\Akared\Desktop\AP 186 Act 3\nip_scope.JPG  
FileSize         4915644
Width                 4752  
Height                3168
BitDepth                  8 
ColorType   truecolor  

Figure 6. Image properties of Figure 5.



Figure 7. GIF animation of the cutest Pokemon [2]


Figure 8. Image properties of Figure 7.

Image File Formats
Here, I'll be discussing a brief history of some image file formats and their uses.

1977-1978: Abraham Lempel and Jacob Ziv published papers on dictionary-based data compression. These algorithms, named LZ77 and LZ78, became the platform for later image compression techniques. Terry Welch further improved on these dictionaries and created LZW compression. [4]

1987: The Graphics Interchange Format (GIF) was created by CompuServe, a company working on distributing small networks between groups of computers. GIFs were based on LZW compression. Before being used as a way to animate images, the GIF served as a way to save time in sending and collecting information with repeating content. [5] This is seen in Figure 7, where the GIF is created from a sequence of frames from a video of the anime Pokemon. Between frames, the information only varies in the region where Pikachu's hands touch its cheeks. Thus there is only a need to record these variations on specific pixels, with the pixels outside this region left unchanged, eliminating the redundant information in that region.

Color indexing is used on GIF files to minimize the number of bytes for the information of each pixel. This restricts the color of GIF to 256 colors. Also, one color can be assigned as transparent such that when the GIF image is placed on top of another image, the colors of the image below can be seen on the transparent region. [4]

Mid 1980’s to early 1990’s: JPEG compression was developed. The Joint Photographic Experts Group developed this method of lossy compression, hence the name of the image file format. JPEG compression works in the frequency domain of the image, but instead of using the Fourier transform, the Discrete Cosine Transform (DCT) is implemented (which is roughly similar to the FT). Compression is performed on the frequency components by allowing a smaller range of possible values for the higher-frequency components of the image. [4] Since it acts mainly on the high-frequency components, to which the eye is less sensitive, there is little perceived difference in the resulting compressed image.

1994: Unisys, the company holding the patent for LZW compression, began demanding that GIF users pay royalties, thus making GIF unpopular [5]. Since there was no patent on LZ77 compression, it was developed into the deflate compression method, which would be used for creating Portable Network Graphics (PNG) files. [4]

One of the features of PNGs is the presence of the filter byte, which is kept separate from the image data. Filtering the data increases its compressibility. Furthermore, the amount of information in the image is not reduced, making PNG a lossless image format.

PNG’s, unlike GIF’s, are not restricted to 256 colors. Special effects can also be added on the image, aside from partial transparency, using the alpha channels of the PNG. [4]

2004: The use of GIF’s became free after the patent for the LZW compression expired.

Aside from these file formats, we also have the Tag Image File Format (TIFF) and BMP (or the Microsoft Windows Bitmap). These two image file formats are usually uncompressed. Also, together with PNG's, these three file formats can present full-color images, as they all support 24-bit color, as compared to GIF, which only has 8-bit color. [4]

Canon and Nikon have their own file formats. Raw images have extensions such as .crw and .cr2 (Canon) and .nef (Nikon). The advantage of these is that the data is stored directly after the analog-to-digital conversion occurring at the camera's sensor. With this, processing can be performed outside the imaging device, allowing more flexibility in manipulating the image.

Playing around with ImageJ

In this section we will be using the following image:


Figure 9: Star trail of Ursa Major (and other constellations beside it). 

The image was created by capturing 190 images, each with a 10-second exposure and 10-second interval on a Nikon D5100 DSLR. The trails of the stars show their one-hour motion across the celestial sphere. The images were processed using the program Startrails and enhanced using Photoshop CS5.

I first converted Figure 9 into binary:

Figure 10. Conversion to binary of Figure 9.

This was achieved by selecting the "Make Binary" command under the "Binary" subsection of the "Process" menu.
Figure 11. Locating the binary conversion on ImageJ.

The image can also be converted to grayscale in the following manner:
Figure 12. Changing the image into grayscale using the 8-bit conversion 

which yields this:
Figure 13. Grayscale image of Figure 9.


Figure 14. Histogram of Figure 13.

The 8-bit conversion of ImageJ scales the image linearly to values from 0 to 255 [3]. The contrast can then be adjusted using this command under the "Image" menu:

Figure 15. Brightness and contrast enhancement command of ImageJ

In this activity, I set the brightness and contrast adjustments to automatic. To vary the contrast, I simply adjusted the first horizontal scroll bar in the Brightness and Contrast options.

Figure 16. Adjusting the brightness and contrast.

These images, each with a different contrast value (indicated above each image):
Figure 17. Varying the contrast values for the 8-bit conversion

when saved as JPEG files, have decreasing file sizes:
 Figure 18. File size versus scaling factor for the  8-bit conversion. The data points were fitted with a linear trend.

It can be seen that as the scaling factor is increased, making the image darker, the file size decreases. The $R^2$ value of 0.8935 suggests that the relationship between the two is not well described by a linear model.

Figure 19. File size versus scaling factor for the  8-bit conversion. The data points were fitted with an exponential trend.

With the 0.9495 $R^2$ value, we can roughly say that the file size has an exponential dependence on the scaling factor for this image. To verify this, maybe I'll try this on other images next time.

Going back to Scilab

Aside from the function imfinfo(), I also explored the use of the functions imread(), gray_imread(), imshow(), imwrite(), and im2bw().

With imread(), an HxWxB matrix is returned, where H is the number of pixels along the height of the image, W is the number of pixels along the width, and B is the number of channels in the image. For a truecolor image, B has a value of 3. Each element of the matrix holds the value of the corresponding pixel in that channel. In comparison, gray_imread() gives only an HxW matrix, since the image is converted into grayscale before obtaining the pixel value stored in each element. The function im2bw(a, b) converts the image a to black and white with b as the threshold value (from 0 to 1).

The function imshow() simply displays the image passed in the parentheses, while imwrite(a, b) saves the created image a to the destination specified by b.
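To illustrate, here is a minimal sketch of how these functions fit together (the filenames are hypothetical, and the SIVP toolbox must be loaded):

rgb = imread("C:\Users\Akared\Desktop\sample.png");      // truecolor image: H x W x 3 hypermatrix
g = gray_imread("C:\Users\Akared\Desktop\sample.png");   // same file read as an H x W grayscale matrix
bw = im2bw(g, 0.5);                                       // black and white, with the threshold set to 0.5
imshow(bw);                                               // display the thresholded image
imwrite(bw, "C:\Users\Akared\Desktop\sample_bw.png");    // save the result to the given destination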


For this activity, I would grade myself a 10/10 plus an additional 2 points. I believe that I have thoroughly discussed my results and presented them well.

References
[1] http://www.wall321.com/Anime/Bleach/bleach_bankai_ichigo_1600x1200_wallpaper_8031
[2] http://24.media.tumblr.com/aea23c88cf928413a84c795c95dad56e/tumblr_mgipztYadH1rkk672o1_400.gif
[3] Saturn's Cassini Division. Retrieved from http://starchild.gsfc.nasa.gov/docs/StarChild/solar_system_level2/cassini_division.html on June 19, 2013.
[4] Chapman, N. and J. Chapman. (2009). Digital Multimedia. 3rd edition. John Wiley & Sons Ltd., England.
[5] GIF: a technical history. Retrieved from http://enthusiasms.org/post/16976438906 on June 19, 2013.
[6] The PNG Image File Format. Retrieved from http://www.fileformat.info/format/png/corion.htm on June 19, 2013.


Thursday, June 13, 2013

Creating synthetic images using Scilab



In this activity, we created two-dimensional images using Scilab. The following images were generated, along with the Scilab code:
Figure 1. Circular aperture of radius 0.6
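Here is a minimal sketch of how such an aperture can be generated (not necessarily the exact code used; the grid size is arbitrary):

nx = 500; ny = 500;                // number of grid points (arbitrary)
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);             // distance of each grid point from the center
A = zeros(nx, ny);
A(find(r <= 0.6)) = 1;             // white disk of radius 0.6 on a black background
imshow(A);                         // display with SIVP (values are already in [0, 1])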

Figure 2. Annulus with inner radius of 0.4 and outer radius of 0.7


 Figure 3. Square of dimension 1.0x1.0 centered at (0,0)



Figure 4.  Sinusoidal pattern along the x-axis (corrugated roof). 

The frequency of the sinusoidal pattern was set to three, which means that there are three peaks within one unit. As you can see, there are three bright bands from -1 to 0 and another three from 0 to 1. Also, the values were scaled to the range zero to one so that the lowest value of the sine wave (now zero) corresponds to the lowest value of the color map and the highest value (now one) corresponds to the highest value of the color map.

Figure 5.  Three-dimensional representation of the sinusoidal pattern along the x-axis (corrugated roof). 

Using the plot3d function of Scilab, Figure 5 was produced. The code snippet is as follows:
nx = 100;
ny = 100;
x = linspace(-1, 1, nx);
y = linspace(-1, 1, ny);
[X, Y] = ndgrid(x,y);
A = zeros(nx, ny);
frequency = 3;
A = (sin(2*frequency*%pi*X)+1)/2;
plot3d(x, y,A)
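As a side note (a sketch, not necessarily the code used for Figure 4), the same matrix A can also be displayed as a flat grayscale image instead of a surface:

imshow(A);            // requires SIVP; the values are already scaled to [0, 1]
// or, without SIVP: grayplot(x, y, A);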
Figure 6. Grating with spacing of 50 units. Since the entire x-axis of the image has 500 units, the code produced 5 pairs of white and black bands to create the grating pattern.
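Here is a sketch of one way such a grating can be produced (again, not necessarily the exact code used):

nx = 500; ny = 500;                        // image size in pixels
stripe = modulo(floor((0:nx-1)/50), 2);    // alternating 0/1 pattern, 50 pixels per band
A = ones(ny, 1) * stripe;                  // repeat the pattern down every row
imshow(A);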

Figure 7. Circular aperture with a Gaussian gradient. The standard deviation of the Gaussian function was 0.5

Recall the Gaussian distribution function:
\begin{equation}
f(r) = A e^{-\frac{(r-B)^2}{2 \sigma^2}}
\end{equation}

With A = 1 and B = 0, the circular aperture is centered at (0,0) with the maximum amplitude of the gradient equal to 1. The line "A(find(r>0.7)) = 0;" simply sets the boundary of the aperture. Without this, we will see a continuously fading image from the center outward.
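For completeness, here is a sketch of the corresponding lines (only the line setting the boundary is quoted from my actual code; the grid setup is the same kind used in the circular-aperture sketch above):

sigma = 0.5;                         // standard deviation of the Gaussian
x = linspace(-1, 1, 500);
y = linspace(-1, 1, 500);
[X, Y] = ndgrid(x, y);
r = sqrt(X.^2 + Y.^2);
A = exp(-(r.^2)/(2*sigma^2));        // A = 1 and B = 0 in the equation above
A(find(r>0.7)) = 0;                  // cut the gradient off at the aperture boundary
imshow(A);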

Also, the pattern can be changed by varying the $\sigma$:
Figure 8. Varying the standard deviation of the Gaussian distribution function. The standard deviation increases from the top left to top right then from bottom left to bottom right.

In this activity, I would rate my performance as 10/10. My classmates, Eric Limos and Alix Santos, were very kind to help me in learning Scilab and performing the activity.

References:
1. Soriano, M. Applied Physics 186 A3 - Scilab basics.

Tuesday, June 11, 2013

Digital Scanning

For the second activity in Applied Physics 186, we practiced digitizing hand-drawn graphs. As an overview, the flowchart below was followed.


Figure 1. Flowchart for the activity

From the MS Physics thesis entitled “Computer simulation of the focusing properties of selected solar concentrators”, submitted by Zenaida B. Domingo in April 1980, the following graph was scanned using a Canon CanoScan LiDE 100 scanner:


Figure 2. Scanned image from Figure 4.12 (Profile of energy distribution along the y-axis) of the thesis entitled “Computer simulation of the focusing properties of selected solar concentrators” (Domingo, 1980). The x-values of the plot correspond to distance while the y-values correspond to relative intensity values.

To know a bit about the context of the graph, the abstract of the thesis is as follows:
"An analytical procedure is derived for determining the distribution of energy at the focal plane of solar concentrating collectors, specifically mirror collectors. This is based on Paul Mazur’s exact double integral which gives the relative intensity distribution as a function of the emitting surface and collecting surface configurations. This double integral is then solved by computer simulation using numerical methods, i.e., Simpson’s Rule for double integration with automatic halving of interval. 
Two solar concentrating collectors are selected – the parabolic mirror collector and the hemispherical bowl collector. From the computer output, the most effective absorber shape, size and location are then deduced. The accuracy of the results is tested by comparing with existing facts on said collectors based on ray-tracing techniques and actual performance."
The scanned image has a size of 1.99MB with dimensions of 3397 x 4304 pixels (width x height). The horizontal and vertical resolutions were both 400 dpi.

However, there was something funny about the scanned graph. As you can see, the intervals on the y axis were 0.25, 0.25, and 0.50. The third interval was weird because its value is twice that of the previous intervals even though the separation of the tick marks on the y axis did not change. To be safe, I assumed that the third tick mark on that axis has a value of (0.75, 0.0).

The tick marks on the axes of the scanned graph were located and their corresponding pixel coordinates on the image were obtained. Using Paint, the pointer was dragged to each tick mark and the pixel coordinates were recorded. As an example, the figure below shows how Paint displays the pixel coordinates. The point (0.0, 3.0), indicated by the yellow arrow, has corresponding pixel coordinates of (x, y) = (1121, 3535).
Figure 3. Determining the pixel coordinates of the tick mark of value (0.0, 3.0) indicated by the yellow arrow and black circle. The yellow box at the lower left contains the pixel coordinate when the pointer was hovered above the said tick mark.

For the tick marks on the x-axis, the distance values were plotted against the x values of the pixel coordinates and a linear fit was made. For the tick marks on the y-axis, the relative intensity values were plotted against the y values of the pixel coordinates and again a linear fit was performed. Note that I flipped the y readings: instead of running from 0 at the top of the image to 4304 at the bottom, I subtracted each y value from 4304 so that the values increase from bottom to top. This makes it easier to visualize the points on the graph with reference to the origin of the graph (increasing relative intensity values from bottom to top). The following figures show the equations of the fits and their corresponding $R^2$ values.
Figure 4. Plot of the value of the tick marks in the distance axis (from the scanned graph) against the x values of the pixel coordinates
Figure 5. Plot of the value of the tick marks in the relative intensity axis (from the scanned graph) against the y values of the pixel coordinates


With these fitting functions, we now have scaling functions for both x and y pixel values. The pixel coordinates of the curve on the scanned graph were obtained using the same method. The y values of the pixel coordinates were also subtracted from the 4304 value. The two fitting functions were used to retrieve the corresponding relative intensity value and distance value of each point on the curve.
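This kind of calibration could also be done directly in Scilab with reglin (a sketch with made-up tick values, just to show the idea):

// x pixel coordinates of the x-axis tick marks and their distance values (made-up numbers)
px_ticks = [1100 1500 1900 2300];
dist_ticks = [3.0 4.0 5.0 6.0];
[m_x, b_x] = reglin(px_ticks, dist_ticks);   // least-squares fit: distance = m_x*px + b_x
// apply the fit to the (made-up) pixel x coordinates of the traced curve
curve_px = [1150 1300 1450];
curve_dist = m_x*curve_px + b_x;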
Figure 6. Reconstructed plot from the scaled values, based on the pixel coordinates and fitting functions

To compare with the scanned graph, I cropped the area of the entire plot (enclosed by the axes) from the scanned graph, and performed the following procedure:
1. Copy the cropped area

2. Click over the reconstructed plot, then click “Format Plot Area”

3. Under the “Fill” section, select “Picture or texture fill” and insert from “Clipboard”.

Final output:
Figure 7. Superposition of the reconstructed graph and the scanned graph. The blue diamond marks indicate the computed pairs of distance and relative intensity values for each pixel coordinate obtained from the curve of the scanned graph.

Boom! We now see a superposition of the scanned graph and the reconstructed graph. As you might have observed, the original (scanned) graph does not indicate the highest value or upper boundary of the relative intensity. What I did was determine the pixel coordinates of the top of the cropped area and solve for the maximum relative intensity. In this manner, I was able to set the maximum value of the reconstructed plot to correspond to the highest value of the cropped area of the hand-drawn graph.

There are some discrepancies in the superposition, which can be attributed to the uncertainties in the fitting functions. Obtaining these uncertainties is quite tedious since the method that I learned from Physics 192 involves a lot of steps. Maybe in the future I'll incorporate this method. In addition, the curve itself has an appreciable thickness, so there is no single pixel coordinate for each point on the curve. We could also account for this uncertainty in the future.

I was able to finish this activity smoothly in around 2 hours, so I think I deserve to have a 5/5 on the correctness of the implementation of the steps. I had fun in making this first report and included visual guides in every step so I'll give myself a 5/5 on the quality of presentation. Overall, I did well and deserve to get a grade of 10 out of 10 for this activity.

I would like to thank Dr Maricor Soriano for her helpful suggestions. And also to my classmates Wynn Improso, Abby Jayin, and Chester Balingit for helping in the acquisition of the scanned image.

That's all for now, see you next time!



References
1. Soriano, M. Applied Physics 186 A2 - Digital Scanning activity handout
2. Domingo, Z. (1980) Computer simulation of the focusing properties of selected solar concentrators. Master of Science (Physics) Thesis, submitted to the College of Arts and Sciences, University of the Philippines Diliman.

Sunday, June 9, 2013

Introduction

I'm a 20-year old Physics major. Twenty years old and I'm stuck in my childhood. But in a positive way, I guess. Since I was a kid, I've wanted to study science and astronomy. And at the same time I've always wanted to become a Power Ranger. ALWAYS. That's why I named this blog, Astro Sentai.

In case you didn't know, the Power Rangers series came from the Japanese TV series Super Sentai. The term sentai was commonly used during the Second World War, referring to small military units in Japan. Each year, a new Super Sentai arrives to protect the Earth from the forces of evil. Before the show Mighty Morphin' Power Rangers came to the Philippines, Filipinos avidly watched Bioman, Liveman, Fiveman, and Jetman. All of these were part of the Super Sentai franchise.

Stretching this introduction a bit longer, I'll tell you about the 35th Super Sentai, the Kaizoku Sentai Gokaiger. They can transform into whatever Super Sentai character that they want to, making them the greatest Super Sentai that ever existed. They're space pirates looking for the greatest treasure in the universe. As a Physicist and Astronomer, this, very much, is a summary of what I want to do someday. Hopefully I will be able to discover something cool. Or perhaps make the first Power Ranger system in real life.

But for now I'll be focusing on image processing and writing about it, as required by my Applied Physics 186 professor. I've already taken three elective classes as part of the BS Physics curriculum. This class is actually an extra elective, and I just wanted to have a class that would require me to make a blog. I could have just made a blog and not taken this course, but then I wouldn't have the drive to make a quality blog.

Anyway, from time to time you'll be reading astronomy-related discussions and articles. I took this class to learn the techniques that I need to process my astro images. As an astrophotographer, I use different software packages to process and enhance my images. I might discuss these programs in parallel with what I learn from this class and compare the advantages (and disadvantages) of processing images with my own programs.

During this semester, I'll be constructing an auto-guiding setup for my telescope. I'll discuss this sometime to give you an overview of what I'll be doing for the final project.

That would be all for now!