Computing discrete gradients


So how do we compute discrete gradients? What we want is an operator, a mask or kernel, like the ones we looked at before, that effectively computes the discrete derivative using cross-correlation. I've been talking about the whole concept of finite differences, which is really nothing more than a numerical method: we approximate a derivative with differences from one column to the next, or one row to the next. So what we're interested in, as we did previously, is a simple kernel that, when applied to an original image in a cross-correlation framework, generates a derivative image.

How do we do this for discrete images? We want to compute gradients in the x and y directions. Again, remember my equation for a change in x and y. For the x gradient, y remains the same because I'm going column by column; for the y gradient, I go row by row and keep the same column. One kind of kernel we can generate to help with this is, for example, a row containing -1 and 1 with zeros everywhere else. Using this in a cross-correlation framework, I can now generate a del-x image, our gradient-in-x image. For those of you wondering how I would get Hy: I would just transpose this kernel.

Now, so far we've been computing gradients using cross-correlation, with Hx and Hy as the two kernels. Those of you who remember our conversation about cross-correlation and convolution should easily be able to predict how we would use convolution here, and what we would have to do to Hx and Hy to use the convolution process rather than cross-correlation: we would have to flip them, because convolution flips the kernel before sliding it. Remember, this kernel is not symmetric in both x and y; it has symmetry along one axis but not both. Ideally, a kernel should have some symmetry about an image point, and by combining these two you might notice there is an overall symmetry.

One question remains: where is the middle point of these kernels? Remember the processing we did before, where we created a 9×9 kernel and used it to place a value at the middle point. In this kernel, of course, there is no middle term. So depending on how the operations are done, you either always place the result at an image point, or you always keep an offset and place it between image points. This has a small impact, and we saw it when we computed simple differences by moving along a row: if I'm differencing across columns, I will always lose one column, and if I'm going down rows, I will lose one row. So that is one way we lose information. But you'll also notice that sometimes we want kernels that give you both symmetry and a well-defined midpoint. For example, here is one kernel that is widely used, as shown in the sketch below. It should remind you of the other kernels we looked at for doing averaging: it takes the average of the information one column to the right and one column to the left, and zeros out everything else.
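To make this concrete, here is a minimal sketch of the simple-difference gradient kernels in a cross-correlation framework, assuming NumPy and SciPy are available; the ramp test image and variable names are illustrative choices, not anything from the lecture.

```python
# A minimal sketch of the simple-difference gradient kernels,
# assuming scipy is available; the image is a synthetic ramp.
import numpy as np
from scipy.signal import correlate2d, convolve2d

# Simple difference kernel for the x direction: I(x+1, y) - I(x, y).
# Written as a 1x2 row; its transpose gives the y-direction kernel Hy.
Hx = np.array([[-1.0, 1.0]])
Hy = Hx.T

# A small test image: intensity increases left to right, so the
# x gradient should be constant and the y gradient zero.
image = np.tile(np.arange(8, dtype=float), (8, 1))

# Cross-correlation slides the kernel as-is over the image. With no
# middle term in the kernel, 'valid' output loses one column (or row).
gx = correlate2d(image, Hx, mode='valid')   # one column narrower
gy = correlate2d(image, Hy, mode='valid')   # one row shorter

# To get the same result with convolution, flip the kernel 180 degrees,
# since convolution itself flips the kernel before sliding it.
gx_conv = convolve2d(image, Hx[::-1, ::-1], mode='valid')
assert np.allclose(gx, gx_conv)

print(gx[0, :])  # all ones: constant change of 1 per column
print(gy[:, 0])  # all zeros: no change down a column
```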
So it's basically an average of the left and right derivatives. Is this a better kernel? Well, it does have some nice features: it has a well-defined midpoint, and it can also be used for various kinds of symmetry calculations. Transposing it, we get the same kind of kernel in the y direction.
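Here is a sketch of that central-difference kernel, again assuming NumPy and SciPy; the (1/2)[-1 0 1] weights follow from averaging the left and right differences, and everything else is illustrative.

```python
# A minimal sketch of the central-difference kernel, which, unlike the
# 1x2 kernel above, has a well-defined midpoint.
import numpy as np
from scipy.signal import correlate2d

# Average of the right difference and the left difference:
# (I(x+1) - I(x))/2 + (I(x) - I(x-1))/2 = (I(x+1) - I(x-1))/2.
Hx = np.array([[-0.5, 0.0, 0.5]])
Hy = Hx.T  # transposing gives the y-direction kernel

image = np.tile(np.arange(8, dtype=float), (8, 1))

# With an odd-width kernel the result lands on an image point, so
# mode='same' keeps the output aligned with the input grid.
gx = correlate2d(image, Hx, mode='same')
gy = correlate2d(image, Hy, mode='same')

print(gx[0, 1:-1])  # interior values are 1: constant ramp in x
print(gy[1:-1, 0])  # zeros: no change in y
```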
