Tentative Solutions for HW5
-- Does NOT reflect the grading criteria --
-- Given just to help you study for the final --

1. a) The behavior may change, depending on the constant. The test

      Sum (w_i * x_i) > t

   becomes

      Sum ((c * w_i) * x_i) > c * t

   which is equivalent to

      c * Sum (w_i * x_i) > c * t

   There are three cases:

   c > 0: Divide both sides by c and we get back the original
   inequality, so the behavior doesn't change.

   c = 0: The test becomes 0 > 0, which is always false, so the
   perceptron always outputs 0 and the behavior does change.

   c < 0: Dividing both sides by c flips the inequality sign, so the
   perceptron behaves in the opposite way: wherever the original
   returned 0 the new one returns 1, and vice versa.

   b) Again it depends on the constant:

      Sum ((c + w_i) * x_i) > c + t
      => Sum (c * x_i + w_i * x_i) > c + t
      => c * Sum (x_i) + Sum (w_i * x_i) > c + t

   If c = 0, the behavior clearly doesn't change. If Sum (x_i) = 1 we
   also get the same behavior (the extra c on each side cancels), but
   that holds only for inputs with exactly one bit on, not in general.
   So in the general case the behavior does change, since the new
   inequality is not equivalent to the one we had before. (Both parts
   are spot-checked in the short script after problem 2.)

2. The training set:

   Example   Input    Threshold   Output
   1         1 0 0    1           1
   2         0 1 1    1           0
   3         1 1 0    1           1
   4         1 1 1    1           0
   5         0 0 1    1           0
   6         1 0 1    1           1

   (The constant 1 in the Threshold column is treated as a fourth
   input, so the weight vectors below have four components, the last
   of which plays the role of the threshold.)

   Start with the zero weight vector:

   Eg. used   Weights       Output   Error   New weights
   1          <0,0,0,0>     0        +1      <1,0,0,1>
   2          <1,0,0,1>     1        -1      <1,-1,-1,0>
   3          <1,-1,-1,0>   0        +1      <2,0,-1,1>
   4          <2,0,-1,1>    1        -1      <1,-1,-2,0>
   5          <1,-1,-2,0>   0         0      same
   6          <1,-1,-2,0>   0        +1      <2,-1,-1,1>
   1          <2,-1,-1,1>   1         0      same
   2          <2,-1,-1,1>   0         0      same
   3          <2,-1,-1,1>   1         0      same
   4          <2,-1,-1,1>   1        -1      <1,-2,-2,0>
   5          <1,-2,-2,0>   0         0      same
   6          <1,-2,-2,0>   0        +1      <2,-2,-1,1>
   1          <2,-2,-1,1>   1         0      same
   2          <2,-2,-1,1>   0         0      same
   3          <2,-2,-1,1>   1         0      same
   4          <2,-2,-1,1>   0         0      same
   5          <2,-2,-1,1>   0         0      same
   6          <2,-2,-1,1>   1         0      same

   So we have converged with weights <2,-2,-1,1>. The perceptron was
   able to find a plane, 2*x1 - 2*x2 - x3 + 1 > 0, that separates the
   three-dimensional input space, therefore this function is indeed
   linearly separable.

   For the sketch: draw the three axes of the input space and label
   them x1, x2, and x3. Pick a dot style (empty or filled) for
   output = 0 and output = 1 and draw the dots that represent the
   inputs. To sketch the plane, find its x1, x2, and x3 intercepts.
   For the plane 2*x1 - 2*x2 - x3 + 1 = 0:

      setting x1 = 0 and x2 = 0 gives x3 = 1
      setting x2 = 0 and x3 = 0 gives x1 = -1/2
      setting x1 = 0 and x3 = 0 gives x2 = 1/2

   Use these points, <0,0,1>, <-1/2,0,0>, and <0,1/2,0>, to sketch
   the plane.
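   As a sanity check on problems 1 and 2, here is a minimal Python
   sketch (my own code, not part of the assignment; the names predict
   and examples are just illustrative) that replays the trace above
   and spot-checks the problem 1 claims:

      def predict(w, x):
          # Output 1 iff the weighted sum is strictly positive; the
          # constant fourth input folds the threshold into the weights.
          return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

      # (x1, x2, x3, 1) and target output, copied from the table above.
      examples = [((1, 0, 0, 1), 1), ((0, 1, 1, 1), 0), ((1, 1, 0, 1), 1),
                  ((1, 1, 1, 1), 0), ((0, 0, 1, 1), 0), ((1, 0, 1, 1), 1)]

      w = [0, 0, 0, 0]
      changed = True
      while changed:                          # repeat passes until error-free
          changed = False
          for x, target in examples:
              error = target - predict(w, x)  # +1, -1, or 0
              if error:
                  w = [wi + error * xi for wi, xi in zip(w, x)]
                  changed = True
      print(w)  # -> [2, -2, -1, 1], matching the trace

      # Problem 1a: scaling every weight (threshold included) by c > 0
      # leaves all predictions unchanged.
      for c in (2, 10):
          scaled = [c * wi for wi in w]
          assert all(predict(scaled, x) == predict(w, x) for x, _ in examples)

      # Problem 1b: adding c to the weights and the threshold changes
      # behavior in general. In the folded form the threshold is -w[3],
      # so "add c to the threshold" means subtracting c from w[3].
      c = 1
      shifted = [w[0] + c, w[1] + c, w[2] + c, w[3] - c]
      print([predict(shifted, x) for x, _ in examples])  # example 4 flips to 1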
3. A perceptron's inputs can be either 0 or 1, as indicated in the
   lecture notes (0/1 signals). There are linear separators that
   operate on real-valued inputs, but we don't call them perceptrons
   :). Those who answered the question taking the domain X to be the
   reals will get full credit. So we have only four cases to check:

      x1,x2    A    B
      0,0      1    0
      1,0      1    1
      0,1      0    0
      1,1      1    1

   "Whenever A => 1, B => 1" does not hold (the <0,0> case), so B is
   certainly not more general than A. But as you can see, whenever B
   is 1, A is also 1, so A is more general than B. Therefore A is
   strictly_more_general than B.

4. The same training set from the previous homework would do here.
   See the solution to part C of problem 1.

5. The equations that a perceptron uses define a linear separator over
   the input domain: the weights represent a plane and the inequality
   picks out the two regions on either side of it. Any set of inputs
   that is not linearly separable cannot be represented that way,
   therefore a perceptron fails. XOR is such a function:

      x2
       ^
       |
       1   0
       |
       0---1--> x1

   (the outputs of XOR at the four corners of the unit square; no
   line separates the 1s from the 0s)

   You can also construct this argument. Assume there were weights
   w1, w2 and a threshold t representing XOR, with the perceptron
   outputting 1 exactly when w1*x1 + w2*x2 + t > 0. From the four
   input/output pairs we get:

      t <= 0          (since XOR(<0,0>) = 0; note that -t is then >= 0)
      w1 > -t         (since w1 + t > 0, from XOR(<1,0>) = 1)
      w2 > -t         (since w2 + t > 0, from XOR(<0,1>) = 1)
      w1 + w2 <= -t   (since XOR(<1,1>) = 0)

   Contradiction!! Adding the second and third inequalities gives
   w1 + w2 > -2t, and since -t >= 0 we have -2t >= -t, so
   w1 + w2 > -t, which is impossible given w1 + w2 <= -t.
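   The algebra above is the real proof and holds for arbitrary real
   weights; purely as an illustration, here is a small brute-force
   sketch (again my own code, with made-up names) that scans a grid of
   integer weights and thresholds and finds no perceptron computing
   XOR:

      # XOR truth table: ((x1, x2), output).
      xor_cases = [((0, 0), 0), ((1, 0), 1), ((0, 1), 1), ((1, 1), 0)]

      def realizes_xor(w1, w2, t):
          # Same convention as above: output 1 iff w1*x1 + w2*x2 + t > 0.
          return all((w1 * x1 + w2 * x2 + t > 0) == bool(out)
                     for (x1, x2), out in xor_cases)

      hits = [(w1, w2, t)
              for w1 in range(-5, 6)
              for w2 in range(-5, 6)
              for t in range(-5, 6)
              if realizes_xor(w1, w2, t)]
      print(hits)  # -> [] : nothing works, as the contradiction predicts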