I worked through an epsilon-delta proof and got to a point where I knew it wasn't correct, so I just used trial and error to pick the right delta. I'll post my work, but I am really interested in the correct way to do problems like this.
Given: The limit of (2-(1/x)) as x approaches 1 is equal to 1. Prove that a delta exists for Epsilon equal to 0.1.
My Work:
By definition of delta: |x-1| < d
Property of equality: (1/x)*|x-1| < (1/x)*d
Distribute: |1-(1/x)| < d(1/x)
Substitution and commutative properties: |2-(1/x)-1| < d(1/x)
Here I realize that I have broken some rules of algebra (I treated x as if it were a constant), so I just use trial and error to decide that the best value for x is (10/11).
Now, d(1/x) = d(11/10)
Substitution: d(11/10) = E
Prop. of Equality: d = E(10/11)
Sub. given info.: d = (1/10)*(10/11)
Now, d = (1/11), which is about 0.091. This is less than E = 0.1 and checks with the book.
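As a sanity check (not a proof), here is a short Python sketch that samples points inside the punctured interval 0 < |x - 1| < 1/11 and confirms that |(2 - 1/x) - 1| stays below 0.1, matching the book's answer:

```python
# Numerically spot-check that delta = 1/11 works for epsilon = 0.1:
# for sampled x with 0 < |x - 1| < delta, we need |(2 - 1/x) - 1| < epsilon.
delta = 1 / 11
epsilon = 0.1

for i in range(1, 1000):
    for sign in (-1, 1):
        x = 1 + sign * delta * i / 1000  # strictly inside the interval
        assert abs((2 - 1 / x) - 1) < epsilon

print("delta = 1/11 appears to work for epsilon = 0.1")
```

Note that at the left endpoint x = 10/11 the gap |1 - 1/x| equals 0.1 exactly, which is why the sampling stays strictly inside the interval.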
How is this done the correct way?