Monday, December 3, 2012

Week 12: Eliminating states and deriving Regular Expressions and A3

This week we mainly learned how to derive a regular expression from a DFSA, and the process is interesting. It's amazing that after several steps of eliminating states, when only the starting state and the final state are left, we can easily derive a RE just by combining the remaining labels. That's where I found the magic! The lecture and tutorial materials were easy to understand this week.
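As a quick sanity check on this idea, here is a small sketch of my own (the DFSA, its state names, and the RE are a made-up example, not from lecture): a two-state DFSA over {0, 1} accepting strings that end in 1, checked against the RE (0|1)*1 that state elimination would produce for it.

```python
import re
from itertools import product

# Hypothetical DFSA over {0, 1} accepting strings that end in '1'.
# States: 'q0' (start), 'q1' (accepting).
delta = {('q0', '0'): 'q0', ('q0', '1'): 'q1',
         ('q1', '0'): 'q0', ('q1', '1'): 'q1'}

def dfsa_accepts(s):
    state = 'q0'
    for ch in s:
        state = delta[(state, ch)]
    return state == 'q1'

# The RE that state elimination should yield for this machine: (0|1)*1
pattern = re.compile(r'(0|1)*1')

# Check that the DFSA and the derived RE agree on all short strings.
for n in range(6):
    for tup in product('01', repeat=n):
        s = ''.join(tup)
        assert dfsa_accepts(s) == (pattern.fullmatch(s) is not None)
print('DFSA and derived RE agree on all strings up to length 5')
```

Of course this brute-force check is no substitute for the derivation itself, but it catches mistakes in the combination step quickly.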

Our final assignment (A3) was submitted this week. It was much harder than the earlier assignments, mainly because it involves a lot of proofs and is more open to creative solutions, as we were asked to write and prove our own algorithm. I like the creative work, but I do hate typing all those symbols! And Microsoft Word is really awkward for drawing DFSAs!

And this is the last week of our day section, since there are no lectures next week (the virtual week). I finally realized that time flies so quickly, and I've nearly completed my first term at UofT. My opinions now: UofT is a good university; it pushes us so hard that we really do learn a lot! And Danny is a really nice professor, so patient and kind!

Thank you professor, and thank you my classmates! I hope all of us will do well in the final! Fighting!

Sunday, November 25, 2012

Week 11: Regular Expression Proofs and NFSAs

This week we first studied regular expressions, which look like an extension of formal languages. Again, it's not hard: just treat them as special sets and do some set operations. What I found interesting and new is the process of proving a regular expression correct. I didn't expect it to be a mutual proof (we need to prove both that L is a subset of L(R) and that L(R) is a subset of L); I had thought we only needed to prove L(R) is a subset of L. Then I realized I was wrong: simply proving one direction doesn't mean L is equal to L(R). The process seems a little bit tricky, though, and made me feel that some parts are just playing with English.
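To convince myself that one direction really isn't enough, I tried a small brute-force experiment (the language and both regular expressions below are my own hypothetical examples): L is the set of strings over {a, b} with an even number of a's; one RE denotes only a strict subset of L, while the other, I believe, denotes exactly L.

```python
import re
from itertools import product

# L = strings over {a, b} with an even number of a's (my own example)
def in_L(s):
    return s.count('a') % 2 == 0

r_small = re.compile(r'(bb)*')      # L(r_small) is a strict subset of L
r_full  = re.compile(r'(b|ab*a)*')  # an RE I believe denotes exactly L

for n in range(7):
    for tup in product('ab', repeat=n):
        s = ''.join(tup)
        # One inclusion alone holds for r_small: everything it matches is in L...
        if r_small.fullmatch(s):
            assert in_L(s)
        # ...but it misses strings like 'aa', so L != L(r_small).
        # The correct RE agrees with L in *both* directions:
        assert in_L(s) == (r_full.fullmatch(s) is not None)

print("'aa' is in L but not in L(r_small)")
```

So L(r_small) ⊆ L holds even though r_small is wrong, which is exactly why the proof has to go both ways.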

Then we started to draw some NFSAs. But so far, I still don't know what the point of drawing NFSAs is, since they don't tell me anything meaningful. The professor said there was some magic between DFSAs and NFSAs, but I didn't really catch what the magic is. :(  It seems we'll discuss this magic more next week.
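If I understood the hint correctly, the "magic" may be that tracking the *set* of states an NFSA could be in turns it into a DFSA (what I've seen called the subset construction) -- though I'm guessing ahead of next week's lecture. Here is a sketch with a made-up NFSA over {0, 1} accepting strings whose second-to-last symbol is 1:

```python
from itertools import product

# Hypothetical NFSA: accepts strings whose second-to-last symbol is '1'.
# delta maps (state, symbol) -> set of possible next states; 'q2' accepts.
delta = {('q0', '0'): {'q0'}, ('q0', '1'): {'q0', 'q1'},
         ('q1', '0'): {'q2'}, ('q1', '1'): {'q2'},
         ('q2', '0'): set(),  ('q2', '1'): set()}

def nfsa_accepts(s):
    # Tracking the set of reachable states is the subset construction
    # done on the fly: each *set* of NFSA states acts as one DFSA state.
    states = {'q0'}
    for ch in s:
        states = set().union(*(delta[(q, ch)] for q in states))
    return 'q2' in states

# Check the simulation against the intended language on short strings.
for n in range(2, 6):
    for tup in product('01', repeat=n):
        s = ''.join(tup)
        assert nfsa_accepts(s) == (s[-2] == '1')
print('NFSA simulation matches the intended language')
```

The nondeterminism ("guess that this 1 is the second-to-last symbol") makes the machine small and easy to draw, which I suspect is the point.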

Here are my lecture notes:


Sunday, November 18, 2012

Week 10: Formal Languages and FSAs

This week, we finally finished dealing with all those abstract concepts and started to do something easier for me to understand -- formal languages and FSAs. For the formal languages part, I regard them as special sets; all we need to do is some operations on sets. Since I did a lot of set operations and analysis last term at my previous school, this part is the second easiest part of this course for me (the easiest part was the early part about simple and complete induction). The DFSAs are very similar to the transition matrices we are learning about in STA247; it's just a process of drawing the transition matrix as a picture.
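Since formal languages really are just sets, the operations translate directly into Python set operations. A tiny example with two made-up languages of my own:

```python
# Treating formal languages as sets: union and concatenation as set operations.
# Hypothetical toy languages over the alphabet {a, b}.
L1 = {'a', 'ab'}
L2 = {'b', 'ba'}

def concat(A, B):
    # L1 . L2 = { xy : x in L1, y in L2 }
    return {x + y for x in A for y in B}

print(sorted(L1 | L2))         # ['a', 'ab', 'b', 'ba']
print(sorted(concat(L1, L2)))  # ['ab', 'aba', 'abb', 'abba']
```

(Kleene star would be this concatenation iterated, but since the result is infinite one can only ever compute it up to some maximum string length.)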

As a result of the long weekend, the pace of the day section and the night section switched. Thus, starting this week, we have to do our quizzes on new material as soon as we finish our lectures. It doesn't feel good: we have to really concentrate during the lecture so that we can make it through the quiz, and there's no time to review beforehand! (Fortunately, for this week's quiz we just needed to draw a DFSA, which is not hard.)

Here are my lecture and tutorial notes:


Monday, November 12, 2012

Week 9: Partial Correctness & Term Test 2

As expected, we discussed correctness further this week. The goal and the point of proving correctness became clearer to me after I studied the past term test 2, which had a problem asking you to prove that a program terminates and that the postconditions will be met (though I later found out this was a topic we hadn't covered before this Thursday). It makes sense that we should find loop invariants to prove our program will meet the postcondition and terminate; it helps us make sure our program is correct.
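For my own review, here is a toy example of the idea (the function and its invariant are my own, not from the test): integer division by repeated subtraction, annotated with the precondition, the loop invariant, and the postcondition.

```python
def int_div(m, n):
    """Precondition: m >= 0 and n > 0.
    Postcondition: returns (q, r) with m == q * n + r and 0 <= r < n."""
    q, r = 0, m
    # Loop invariant: m == q * n + r and r >= 0.
    # It holds on entry (q = 0, r = m) and each iteration preserves it.
    while r >= n:
        q += 1
        r -= n
    # On exit the invariant still holds and additionally r < n,
    # which together give exactly the postcondition.
    # Termination: r is a non-negative integer that strictly decreases.
    return q, r

print(int_div(17, 5))  # (3, 2)
```

The invariant does the real work: the loop body is trivial, but without the invariant there is nothing to connect the exit condition to the postcondition.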

The part I didn't quite understand during the lecture was the process of deriving conditions from pictures. I was absent-minded for a while, and when my attention came back I couldn't figure out how the pictures were derived; I didn't even know what the problem was! After I asked a friend and reviewed the slides (I couldn't find the corresponding part in the textbook), I think it's just another example of proving an iterative program correct, expressed with pictures to aid understanding.

We had our term test 2 this week. I had been worrying about writing Python code on the exam (because I hadn't learned Python before), but it turned out that the test was easier than I had expected and didn't involve programming.

The lecture notes for this week are as follows:



Saturday, November 3, 2012

Week 8: (Precondition + Termination) => Postcondition

I had a strange feeling after finishing this week's lecture: I did follow the pace of the lecture and understood what the professor was proving, but when I think back on it, I can't say what I learned. It's like we first use our common sense to find some preconditions for the code, then figure out what the code is doing, and that becomes our postcondition. The process of proving the correctness of the program then just seems like assuming the postcondition and going through the program again, applying some induction to analyze it.
But I just don't see the point of doing this; it seems like we first analyze the program to learn what it does, then analyze it again using a more complicated method.

Maybe I need to discuss it with someone, or wait and see if the professor says more next week.

This week's tutorial felt different to me. We did some practice writing divide and conquer algorithms; as I haven't learned Python before, my code looked like a combination of C and Python :(  Fortunately, the grammar doesn't matter. Then we did more practice on time complexity and the Master Theorem; it was a little bit hard but very helpful for understanding the lecture material.
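To practice writing the Python version properly, here is my attempt at a purely Python recursive binary search (a sketch of my own, not the tutorial's solution):

```python
def binary_search(lst, x, lo=0, hi=None):
    """Return an index of x in sorted lst, or -1 if x is absent.
    Divide and conquer: split the range in half and recurse on one side."""
    if hi is None:
        hi = len(lst)
    if lo >= hi:          # empty range: x is not here
        return -1
    mid = (lo + hi) // 2
    if lst[mid] == x:
        return mid
    elif lst[mid] < x:    # conquer the right half
        return binary_search(lst, x, mid + 1, hi)
    else:                 # conquer the left half
        return binary_search(lst, x, lo, mid)

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

The default arguments let callers write `binary_search(lst, x)` without passing the range explicitly, which feels more like Python and less like C.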

Sunday, October 28, 2012

Week 7: Divide and Conquer & Master Theorem

Just as I guessed last week, we did go into further details of the Master Theorem. But as I didn't review the lecture notes or the textbook this week, I found it still hard to really understand. This time it was the TA who helped me a lot: I asked him about the Master Theorem after the tutorial, and he really caught my point and made me understand it in words! I had just been confused by those abstract parameters of the Master Theorem.
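To pin down those abstract parameters for myself, here is a toy helper (entirely my own sketch; the function name and the restriction to f(n) = n^d are my assumptions) that reports which case of the simplified Master Theorem applies to T(n) = a*T(n/b) + n^d:

```python
import math

def master_theorem(a, b, d):
    """Asymptotic class of T(n) = a*T(n/b) + n**d (simplified Master Theorem).
    a = number of subproblems, b = shrink factor, d = exponent of the
    non-recursive work. Compares d to the critical exponent log_b(a)."""
    crit = math.log(a, b)          # critical exponent log_b(a)
    # Note: comparing floats exactly is fine for these small integer cases,
    # but would need care in general.
    if d < crit:
        return f"Theta(n^{crit:g})"
    elif d == crit:
        return f"Theta(n^{d:g} log n)"
    else:
        return f"Theta(n^{d:g})"

print(master_theorem(2, 2, 1))   # merge sort: Theta(n^1 log n)
print(master_theorem(1, 2, 0))   # binary search: Theta(n^0 log n), i.e. Theta(log n)
print(master_theorem(4, 2, 1))   # recursion dominates: Theta(n^2)
```

Seeing a, b and d as "how many subproblems, how much smaller, how much gluing work" is the wording that finally made the three cases click for me.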

We also studied more of the divide and conquer method. At first I was confused again about the meaning of the term, thinking it would be something new. But later I realized that we have already used this method a lot. It's just the philosophy of using techniques like recursion to divide a big problem into several smaller problems, solve them separately, and then recombine the results.

By now, we have practiced a lot on time complexity, and I'm quite familiar with terms like upper bound, big-oh, etc. :)

Practice makes perfect!

The lecture notes for this week are not well organized, so I've decided not to post them.

Friday, October 19, 2012

Week 6: More Time Complexity, Unwinding and the Master Theorem

We covered a lot of things in this week's lecture, but as I've made up the background knowledge I was missing, I managed to keep up with the pace of the lecture (again, thanks to Jason and Harry!).

First of all, we discussed more about time complexity, trying to calculate the upper bound of recursive binary search. As we ran into trouble using the method we had used to calculate the lower bound, we applied another method, which to me is very similar to the former one, just trickier.

Then we learned to use unwinding to analyze questions involving recursive definitions. Since I did this a lot at my previous school (like analyzing the Fibonacci sequence), it is not hard for me. But I didn't quite catch how to prove that the recurrence of binary search is non-decreasing. Then I read Lemma 3.6 in the textbook, which explains it very clearly.
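As a check on the unwinding, here is a quick experiment of my own with the (assumed) binary-search step recurrence T(1) = 1, T(n) = T(n // 2) + 1, compared against the closed form floor(log2 n) + 1 that unwinding predicts:

```python
import math

def T(n):
    # Assumed recurrence for worst-case binary search steps:
    # T(1) = 1, T(n) = T(n // 2) + 1
    return 1 if n <= 1 else T(n // 2) + 1

# Unwinding: T(n) = T(n // 2) + 1 = T(n // 4) + 2 = ... = floor(log2 n) + 1.
# Check the closed form against the recurrence for small n.
for n in range(1, 1000):
    assert T(n) == math.floor(math.log2(n)) + 1
print('closed form matches the recurrence for n < 1000')
```

Being able to verify a guessed closed form like this is handy before investing in the induction proof.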

Later, we were introduced to the Master Theorem, which I found really abstract and hard to understand. But since the professor didn't go into much detail, maybe this topic isn't that important, or maybe we'll get to the details later.

This week's lecture notes are as follows: