Saturday, 31 May 2014



Is there a way of doing deconvolution with OpenCV?


I'm just impressed by the improvement shown here


http://www.olympusmicro.com/primer/digitalimaging/deconvolution/images/deconalgorithmsfigure1.jpg


and would like to add this feature to my software as well.


EDIT (Additional information for bounty.)


I still have not figured out how to implement deconvolution. This code helps me sharpen the image, but I think deconvolution could do it better.


void ImageProcessing::sharpen(QImage & img)
{
    IplImage* cvimg = createGreyFromQImage( img );
    if ( !cvimg ) return;

    IplImage* gsimg    = cvCloneImage( cvimg );
    IplImage* dimg     = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 1 );
    IplImage* outgreen = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 3 );
    IplImage* zeroChan = cvCreateImage( cvGetSize(cvimg), IPL_DEPTH_8U, 1 );
    cvZero( zeroChan );

    // wrap the IplImages in cv::Mat headers (no data copy)
    cv::Mat smat( gsimg, false );
    cv::Mat dmat( dimg, false );

    // unsharp mask: out = 1.5 * original - 0.5 * Gaussian-blurred copy
    cv::GaussianBlur( smat, dmat, cv::Size(0, 0), 3 );
    cv::addWeighted( smat, 1.5, dmat, -0.5, 0, dmat );

    // put the sharpened channel into the green plane of the output
    cvMerge( zeroChan, dimg, zeroChan, NULL, outgreen );

    img = IplImage2QImage( outgreen );
    cvReleaseImage( &gsimg );
    cvReleaseImage( &cvimg );
    cvReleaseImage( &dimg );
    cvReleaseImage( &outgreen );
    cvReleaseImage( &zeroChan );
}

Hoping for helpful hints!




Sure, you can write deconvolution code using OpenCV, but there are no ready-to-use functions (yet).


To get started, you can look at this example, which shows an implementation of Wiener deconvolution in Python using OpenCV.
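
In case the link ever goes stale, here is a minimal numpy sketch of the same idea (if I remember correctly, OpenCV's own samples also ship a deconvolution.py demo along these lines). It assumes a single-channel image and a known Gaussian PSF; the file names, kernel size and noise-to-signal ratio below are placeholders you would tune for your data.

import numpy as np
import cv2

def wiener_deconvolve(blurred, psf, nsr=0.01):
    # Frequency-domain Wiener filter: F = conj(H) * G / (|H|^2 + NSR)
    blurred = blurred.astype(np.float64)
    kh, kw = psf.shape
    # zero-pad the (normalised) PSF to the image size and centre it on the origin
    psf_pad = np.zeros_like(blurred)
    psf_pad[:kh, :kw] = psf / psf.sum()
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))

# usage: "blurred.png" and the Gaussian PSF parameters are placeholders
img = cv2.imread("blurred.png", cv2.IMREAD_GRAYSCALE)
g = cv2.getGaussianKernel(15, 3)
psf = g @ g.T                      # 15x15 Gaussian kernel as the assumed PSF
restored = wiener_deconvolve(img, psf, nsr=0.01)
cv2.imwrite("restored.png", np.clip(restored, 0, 255).astype(np.uint8))

The nsr term regularises frequencies where the PSF has little energy; setting it to zero gives plain inverse filtering, which amplifies noise badly.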


Here is another example in C, but it is from 2012, so it may be outdated.




Nearest neighbor deconvolution is a technique typically used on a stack of images taken at different Z positions (a Z-stack) in optical microscopy. This review paper, Jean-Baptiste Sibarita, "Deconvolution Microscopy", Adv Biochem Engin/Biotechnol (2005) 95: 201–243, covers quite a lot of the techniques used, including the one you are interested in. This is also a nice intro: http://blogs.fe.up.pt/BioinformaticsTools/microscopy/


This numpy + scipy Python example shows how it works:


from pylab import *
import numpy
import scipy.ndimage

width = 100
height = 100
depth = 10
imgs = zeros((height, width, depth))

# prepare test input: a stack of images which is zero except for a point that has been blurred by a 3D Gaussian
#sigma = 3
#imgs[height//2, width//2, depth//2] = 1
#imgs = scipy.ndimage.filters.gaussian_filter(imgs, sigma)

# read real input from a stack of images img_0000.png, img_0001.png, ... (total number = depth)
# these must have the same dimensions, equal to width x height above
# if imread reads them as having more than one channel, they need to be converted to one channel
for k in range(depth):
    imgs[:,:,k] = scipy.ndimage.imread( "img_%04d.png" % (k) )

# prepare output array; the top and bottom images in the stack don't get filtered
out_imgs = zeros_like(imgs)
out_imgs[:,:,0] = imgs[:,:,0]
out_imgs[:,:,-1] = imgs[:,:,-1]

# apply nearest neighbor deconvolution
alpha = 0.4         # adjustable parameter, strength of the filter
sigma_estimate = 3  # estimate, just happens to be the same as the actual value

for k in range(1, depth-1):
    # subtract blurred neighboring planes in the stack from the current plane
    # doesn't have to be Gaussian; any other kind of blur may be used: this should approximate the PSF
    out_imgs[:,:,k] = (1+alpha) * imgs[:,:,k] \
        - (alpha/2) * scipy.ndimage.filters.gaussian_filter(imgs[:,:,k-1], sigma_estimate) \
        - (alpha/2) * scipy.ndimage.filters.gaussian_filter(imgs[:,:,k+1], sigma_estimate)

# show the result: original on the left, filtered on the right
compare_img = copy(out_imgs[:,:,depth//2])
compare_img[:, :width//2] = imgs[:, :width//2, depth//2]
imshow(compare_img)
show()



I'm not sure you understand what deconvolution is. The idea behind deconvolution is to remove the detector response from the image. This is commonly done in astronomy.


For instance, if you have a CCD mounted to a telescope, then any image you take is a convolution of what you are looking at in the sky and the response of the optical system. The telescope (or camera lens or whatever) will have some point spread function (PSF). That is, if you look at a point source that is very far away, like a star, when you take an image of it, the star will be blurred over several pixels. This blurring -- the point spread -- is what you would like to remove. If you know the point spread function of your optical system very well, then you can deconvolve the PSF from your image and obtain a sharper image.
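Put concretely (this is just the standard imaging model, not something specific to your setup): if f is the true scene, h the PSF of the optics and n the noise, the recorded image g is

    g(x, y) = (f * h)(x, y) + n(x, y)

and deconvolution is the attempt to recover f given g and an estimate of h.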


Unless you happen to know the PSF of your optics (nontrivial to measure!), you should seek out some other option for sharpening your image. I doubt OpenCV has anything like a Richardson-Lucy algorithm built-in.
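
For completeness, here is a minimal sketch of the Richardson-Lucy iteration in Python with scipy rather than OpenCV; it assumes you can supply a PSF estimate, and scikit-image's restoration module offers a ready-made richardson_lucy if you prefer not to roll your own.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    # Richardson-Lucy: estimate *= (observed / (estimate conv psf)) conv mirrored psf
    observed = observed.astype(np.float64)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(observed, observed.mean())   # flat initial guess
    eps = 1e-12                                          # guard against division by zero
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

More iterations sharpen more but also amplify noise, so in practice you stop early or add regularisation.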


