diff --git a/_toc.yml b/_toc.yml index c93a2ee..fa074f6 100644 --- a/_toc.yml +++ b/_toc.yml @@ -100,8 +100,21 @@ parts: - file: notebooks/photometry/01.03-Local-peak-detection.ipynb title: Local peak detection sections: + - file: notebooks/photometry/01.03.01-peak_detection-simulated-image.ipynb + title: Overview with simulated image - file: notebooks/photometry/01.03.02-peak_detection-XDF.ipynb title: Peak detection in the XDF + - file: notebooks/photometry/01.03.03-peak_detection-Feder.ipynb + title: Peak detection in a ground-based stellar image + - file: notebooks/photometry/01.04-image-segmentation.ipynb + title: Image segmentation + sections: + - file: notebooks/photometry/01.04.01-image-segmentation-simulated-image.ipynb + title: Overview with simulated image + - file: notebooks/photometry/01.04.02-image-segmentation-XDF.ipynb + title: Image segmentation in the XDF + - file: notebooks/photometry/01.04.03-image-segmentation-Feder.ipynb + title: Image segmentation in a ground-based stellar image # - url: https://github.com/mwcraig/ccd-reduction-and-photometry-guide diff --git a/notebooks/photometry/00.00-Preface.ipynb b/notebooks/photometry/00.00-Preface.ipynb index 3991afd..b814cdb 100644 --- a/notebooks/photometry/00.00-Preface.ipynb +++ b/notebooks/photometry/00.00-Preface.ipynb @@ -22,19 +22,14 @@ "```{note}\n", "Several notebooks in this part of the guide were originally developed by Lauren Chambers and Erik Tollerud with support from the Community Software Initiative at the Space Telescope Science Institute, the Astropy Project and ScienceBetter Consulting. Those notebooks are:\n", "\n", - "+ A notebook\n", - "+ Another notebook \n", + "+ [Handling masking before background removal](01.01.02-Background-estimation-XDF.ipynb)\n", + "+ [Detecting sources in the XDF](01.02.02-IRAF-like-source-detection-XDF.ipynb) \n", + "+ [Peak finding in the XDF](01.03.02-peak_detection-XDF.ipynb)\n", + "+ [Image segmentation in the XDF](01.04.02-image-segmentation-XDF.ipynb)\n", "\n", - "\"STScI\n", + "\"STScI\n", "```" ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { @@ -53,7 +48,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.9" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.00-Source-detection.ipynb b/notebooks/photometry/01.00-Source-detection.ipynb index ef547f4..b129d56 100644 --- a/notebooks/photometry/01.00-Source-detection.ipynb +++ b/notebooks/photometry/01.00-Source-detection.ipynb @@ -15,7 +15,7 @@ "\n", "+ Find sources in your image and perform photometry on every source you can detect in the image.\n", "+ Perform photometry on objects from a catalog (either one you have created or from some other source); this presumes there is WCS information in the images so that catalog RA/Dec can be translated to pixel position on the image.\n", - "+ Stack your images, detect the sources in that much deeper stacked image, treat those sources as your catalog and use that catlog to erform photometry." + "+ Stack your images, detect the sources in that much deeper stacked image, treat those sources as your catalog and use that catalog to perform photometry."
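The catalog-based workflow in the second bullet depends on translating catalog RA/Dec into pixel positions through the image's WCS. A minimal sketch of that translation step is below; the file name and coordinate values are hypothetical placeholders, not taken from these notebooks.

```python
# Sketch: convert catalog sky positions to pixel positions with the image WCS.
# "science_image.fits" and the coordinates are hypothetical placeholders.
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astropy.wcs import WCS

header = fits.getheader("science_image.fits")
wcs = WCS(header)

catalog = SkyCoord(ra=[53.16, 53.17] * u.deg, dec=[-27.78, -27.79] * u.deg)
x, y = wcs.world_to_pixel(catalog)  # pixel positions to use for photometry
```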
] }, { @@ -46,7 +46,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.4" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.01-Background-removal.ipynb b/notebooks/photometry/01.01-Background-removal.ipynb index 13ed6be..c1bbe23 100644 --- a/notebooks/photometry/01.01-Background-removal.ipynb +++ b/notebooks/photometry/01.01-Background-removal.ipynb @@ -11,7 +11,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The first step in detecting sources is to remove any background from the image. There are a number of ways to do this, ranging from subtracting a single value (typically the median or mean) from every pixel to more sophisticated methods either fit a slowly-varying shape to the background or that smooth the input image in some way.\n", + "The first step in detecting sources is to remove any background from the image. There are a number of ways to do this, ranging from subtracting a single value (typically the median or mean) from every pixel to more sophisticated methods that either fit a slowly-varying shape to the background or smooth the input image in some way.\n", "\n", "There is probably not a single \"best\" way to do this; what matters is whether you are able to detect the sources you need to detect in the images you have.\n", "\n", @@ -35,7 +35,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.01.01-Background-removal-simulated-data.ipynb b/notebooks/photometry/01.01.01-Background-removal-simulated-data.ipynb index fac2bad..0e67a63 100644 --- a/notebooks/photometry/01.01.01-Background-removal-simulated-data.ipynb +++ b/notebooks/photometry/01.01.01-Background-removal-simulated-data.ipynb @@ -11,11 +11,31 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The first step in detecting sources is to remove any background from the image. There are a number of ways to do this, ranging from subtracting a single value (typically the median or mean) from every pixel to more sophisticated methods either fit a slowly-varying shape to the background or that smooth the input image in some way.\n", + "The first step in detecting sources is to remove any background from the image. This notebook walks through a couple of ways to do that: using a single value for the background or constructing a two-dimensional background image.\n", "\n", - "There is probably not a single \"best\" way to do this; what matters is whether you are able to detect the sources you need to detect in the images you have.\n", + "The imports for this notebook are in the cell below." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "\n", "from photutils.background import Background2D, MeanBackground, MADStdBackgroundRMS\n", "from photutils.datasets import make_100gaussians_image\n", "from photutils.segmentation import detect_threshold, detect_sources\n", "from photutils.utils import circular_footprint\n", "\n", - "Note that the way you do background removal for source detection does *not* need to be the same way you do it for photometry. It is perfectly fine to use the same method for both, of course, but it is not required."
+ "from astropy.visualization import SqrtStretch\n", + "from astropy.visualization.mpl_normalize import ImageNormalize\n", + "from astropy.visualization import hist\n", + "from astropy.stats import sigma_clipped_stats, SigmaClip" ] }, { @@ -39,9 +59,11 @@ "\n", "We begin by constructing a histogram of pixel values in a sample image. The code below illustrates a few useful things from astropy and photutils:\n", "\n", - "+ photutils comes with useful simulated images.\n", - "+ astropy comes with a histogram function that will automatically bin the data in a way that brings out features in your data that uniform-width bins might miss.\n", - "+ astropy has several stretches and normalizations to bring out features in astronomical images.\n" + "+ [`photutils`](https://photutils.readthedocs.io/en/stable/) comes with useful simulated images.\n", + "+ [`astropy`](https://docs.astropy.org/en/stable/) comes with a histogram function that will automatically bin the data in a way that brings out features in your data that uniform-width bins might miss.\n", + "+ [`astropy`](https://docs.astropy.org/en/stable/) has several stretches and normalizations to bring out features in astronomical images.\n", + "\n", + "The simulated image we will work with in this section is below.\n" ] }, { @@ -50,34 +72,21 @@ "metadata": {}, "outputs": [], "source": [ - "import numpy as np\n", - "\n", - "%matplotlib inline\n", - "import matplotlib.pyplot as plt\n", - "\n", - "from photutils.background import Background2D, MeanBackground, MADStdBackgroundRMS\n", - "from photutils.datasets import make_100gaussians_image\n", - "from photutils.segmentation import detect_threshold, detect_sources\n", - "from photutils.utils import circular_footprint\n", - "\n", - "from astropy.visualization import SqrtStretch\n", - "from astropy.visualization.mpl_normalize import ImageNormalize\n", - "from astropy.visualization import hist\n", - "from astropy.stats import sigma_clipped_stats, SigmaClip\n", - "\n", "data = make_100gaussians_image()\n", "\n", + "# Create an image stretch that will be applied when we display the image.\n", "norm = ImageNormalize(stretch=SqrtStretch())\n", - "plt.imshow(data, norm=norm, origin='lower', cmap='Greys_r')\n" + "\n", + "plt.figure(figsize=(10, 5))\n", + "plt.imshow(data, norm=norm, origin='lower', cmap='Greys_r')\n", + "plt.title(\"Simulated image with 100 Gaussian sources\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "There are a couple of things to note about this distrubtion. \n", - "\n", - "The bulk of the pixel values are noise. The function [`make_100gaussians_image`](https://photutils.readthedocs.io/en/stable/api/photutils.datasets.make_100gaussians_image.html#photutils.datasets.make_100gaussians_image) adds a background with a Gaussian distribution centered at 5 with a standard deviation of 2. The long tail at higher pixel values are from the sources in the image." + "Next we plot a histogram of the pixel values in the image. As suggested at the beginning of this section, the distribution is not symmetric." 
] }, { @@ -95,17 +104,33 @@ "    alpha=0.5, label='Pixel values')\n", "plt.xlabel('Pixel value')\n", "plt.ylabel('Number of pixels')\n", - "# The semi-colon supresses a display of the grid object representation\n", + "plt.title(\"Histogram of pixel values\")\n", + "# The semi-colon suppresses a display of the grid object representation\n", "plt.grid();" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are a couple of things to note about this distribution. \n", + "\n", + "The bulk of the pixel values are noise. The function [`make_100gaussians_image`](https://photutils.readthedocs.io/en/stable/api/photutils.datasets.make_100gaussians_image.html#photutils.datasets.make_100gaussians_image) adds a background with a Gaussian distribution centered at 5 with a standard deviation of 2. The long tail at higher pixel values is from the sources in the image." + ] + }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Estimating the background value\n", "\n", - "The plot below shows the result of several ways of estimating the background from the data. None of these straightforward estimates give a good measure of the true background level, indicated by the green line." + "The plot below shows the result of several ways of estimating the background from the data:\n", + "\n", + "+ the mean of all of the pixel values\n", + "+ the median of all of the pixel values\n", + "+ a sigma-clipped median of all of the pixel values.\n", + "\n", + "None of these straightforward estimates give a good measure of the true background level, which is indicated by the green line." ] }, { @@ -133,6 +158,7 @@ "plt.axvline(clipped_med, label='Sigma-clipped median', color='yellow')\n", "plt.xlabel('Pixel value')\n", "plt.ylabel('Number of pixels')\n", + "plt.title(\"Methods of estimating background value\")\n", "plt.legend();" ] }, @@ -223,6 +249,13 @@ "source_mask = segment_img.make_source_mask(footprint=footprint)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now add to the plot the sigma-clipped median computed using only the pixels that do not contain a source."
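That value can be computed by passing the source mask to `sigma_clipped_stats`; a sketch is below, reusing the `source_mask` built above and producing the `mask_clip_med` value plotted in the next cell.

```python
# Sketch: sigma-clipped statistics over only the unmasked (source-free) pixels.
mask_clip_mean, mask_clip_med, mask_clip_std = sigma_clipped_stats(
    data, sigma=3.0, mask=source_mask)
```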
+ ] + }, { "cell_type": "code", "execution_count": null, @@ -240,8 +273,9 @@ "plt.axvline(avg_all_pixels, label='Mean of all values', color='red')\n", "plt.axvline(median_all_pixels, label='Median of all values', color='cyan')\n", "plt.axvline(clipped_med, label='Sigma-clipped median', color='yellow')\n", - "plt.axvline(mask_clip_med, label='Sigma-clipped, sources masked',\n", + "plt.axvline(mask_clip_med, label='Sigma-clipped median, sources masked',\n", "            color='violet', linestyle='dashed')\n", + "plt.title(\"Methods of estimating background value\")\n", "plt.legend();" ] }, @@ -295,7 +329,8 @@ "data2 = data + gradient\n", "\n", "plt.figure(figsize=(10, 5))\n", - "plt.imshow(data2, norm=norm, origin='lower', cmap='Greys_r')" + "plt.imshow(data2, norm=norm, origin='lower', cmap='Greys_r')\n", + "plt.title(\"Simulated image with background gradient\");" ] }, { @@ -363,7 +398,8 @@ "outputs": [], "source": [ "plt.figure(figsize=(10, 5))\n", - "plt.imshow(bgd.background, norm=norm, origin='lower', cmap='Greys_r')" + "plt.imshow(bgd.background, norm=norm, origin='lower', cmap='Greys_r')\n", + "plt.title(\"Calculated image background\");" ] }, { @@ -383,10 +419,9 @@ "source": [ "back_sub_data = data2 - bgd.background\n", "\n", - "sub_mean, sub_med, sub_std = sigma_clipped_stats(back_sub_data,\n", - "                                                 sigma=3)\n", + "sub_mean, sub_med, sub_std = sigma_clipped_stats(back_sub_data, sigma=3)\n", "\n", - "print(sub_mean, sub_med, sub_std)" + "print(f\"Mean: {sub_mean}, Median: {sub_med}, Standard deviation: {sub_std}\")" ] }, { @@ -415,7 +450,7 @@ "source": [ "## Summary\n", "\n", - "An image needs to be background-subtracted before you detect the sources in the image. That background can be represented by either a single number (e.g. the mean or median) or by a two-dimensional image. In both cases the best estimate of the background is obtained when sources are masked out before estimating the background. Though this means doing two rounds of source detection, one to estimate the background and one for the actual source detection, the method of finding sources used above is reasonably fast.\n", + "An image needs to be background-subtracted before you detect the sources in the image. That background can be represented by either a single number, e.g. the mean or median, or by a two-dimensional image. In both cases the best estimate of the background is obtained when sources are masked out before estimating the background. Though this means doing two rounds of source detection, one to estimate the background and one for the actual source detection, the method of finding sources used above is reasonably fast.\n", "\n", "Next, we will look at how to do background estimation for a couple of actual science images to get a more nuanced understanding of the choices in background subtraction." ] @@ -437,7 +472,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.01.02-Background-estimation-XDF.ipynb b/notebooks/photometry/01.01.02-Background-estimation-XDF.ipynb index 688806a..4b1a41a 100644 --- a/notebooks/photometry/01.01.02-Background-estimation-XDF.ipynb +++ b/notebooks/photometry/01.01.02-Background-estimation-XDF.ipynb @@ -106,7 +106,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Let's look at the data, including the background we've added. The cell below sets up an image normaliztaion we will use for all of our later views of the image. 
It also defines a small function, `format_colorbar` that is used to set up the colorbar in the subsequent plots." + "Let's look at the data, including the background we've added. The cell below sets up an image normalization we will use for all of our later views of the image. It also defines a small function, `format_colorbar`, that is used to set up the colorbar in the subsequent plots." ] }, { @@ -281,7 +281,7 @@ "source": [ "### Mask sources, then calculate scalar background\n", "\n", - "As discussed in [FILL IN LINK](FILL IN LINK), the most accurate estimate of the scalar background is obtained when the sources are masked first. The cell below does that procedure." + "As discussed in [the section on background estimation with a simulated image](01.01.01-Background-removal-simulated-data.ipynb), the most accurate estimate of the scalar background is obtained when the sources are masked first. The cell below does that procedure." ] }, { @@ -441,14 +441,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `Background2D` class in `photutils` allows users to model 2-dimensional backgrounds, by calculating the mean or median in small boxes, and smoothing these boxes to reconstruct a continuous 2D background. The class includes the following arguments/attributes:\n", + "The [`Background2D`](https://photutils.readthedocs.io/en/stable/api/photutils.background.Background2D.html#photutils.background.Background2D) class in [`photutils`](https://photutils.readthedocs.io/en/stable/index.html) allows users to model 2-dimensional backgrounds by calculating the mean or median in small boxes and smoothing these boxes to reconstruct a continuous 2D background. The class includes the following arguments/attributes:\n", "* **`box_size`** — the size of the boxes used to calculate the background. This should be larger than individual sources, yet still small enough to encompass changes in the background.\n", "* **`filter_size`** — the size of the median filter used to smooth the final 2D background. The dimension should be odd along both axes.\n", "* **`filter_threshold`** — threshold below which the smoothing median filter will not be applied.\n", - "* **`sigma_clip`** — an ` astropy.stats.SigmaClip` object that is used to specify the sigma and number of iterations used to sigma-clip the data before background calculations are performed.\n", + "* **`sigma_clip`** — an [`astropy.stats.SigmaClip`](https://docs.astropy.org/en/stable/api/astropy.stats.SigmaClip.html#astropy.stats.SigmaClip) object that is used to specify the sigma and number of iterations used to sigma-clip the data before background calculations are performed.\n", "* **`bkg_estimator`** — the method used to perform the background calculation in each box (mean, median, SExtractor algorithm, etc.).\n", "\n", - "For this example, we will use the `MeanBackground` estimator. Note though, that there is little or no spatial variation apparent in the background of this image." + "For this example, we will use the [`MeanBackground`](https://photutils.readthedocs.io/en/stable/api/photutils.background.MeanBackground.html#photutils.background.MeanBackground) estimator. Note, though, that there is little or no spatial variation apparent in the background of this image."
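Putting those arguments together, a typical call looks roughly like the sketch below; the `box_size` and `filter_size` values are illustrative choices, not values tuned for this image.

```python
# Sketch of a typical Background2D call using the arguments described above.
# box_size and filter_size here are illustrative, not tuned for this image.
sigma_clip = SigmaClip(sigma=3.0, maxiters=5)
bkg = Background2D(data, box_size=50, filter_size=(3, 3),
                   sigma_clip=sigma_clip, bkg_estimator=MeanBackground())
data_bkgsub = data - bkg.background  # 2D background-subtracted image
```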
] }, { @@ -592,7 +592,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.01.03-Background-estimation-FEDER.ipynb b/notebooks/photometry/01.01.03-Background-estimation-FEDER.ipynb index cae66a7..f9ad025 100644 --- a/notebooks/photometry/01.01.03-Background-estimation-FEDER.ipynb +++ b/notebooks/photometry/01.01.03-Background-estimation-FEDER.ipynb @@ -150,9 +150,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Now that the data are properly masked, we can calculate some basic statistical values to do a scalar estimation of the image background. \n", - "\n", - "By \"scalar estimation\", we mean the calculation of a single value (such as the mean or median) to represent the value of the background for our entire two-dimensional dataset. This is in contrast to a two-dimensional background, where the estimated background is represented as an array of values that can vary spatially with the dataset. We will calculate a 2D background in the upcoming section.\n", + "Now that the data are properly masked, we can calculate some basic statistical values to do a scalar estimation of the image background, as we did in the previous section.\n", "\n", "### Calculate scalar background value\n", "\n", @@ -231,7 +229,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "But what difference does this sigma clipping make? And how important is masking, anyway? Let's visualize these statistics to get an idea:" + "What difference do sigma clipping and masking make in this case? " ] }, { @@ -270,7 +268,7 @@ "\n", "\n", "ax2.set_xlim(160, 170) # flux_range)\n", - "ax2.set_xlabel(r'Flux Count Rate ({})'.format(ccd_image.unit.to_string('latex')), fontsize=14)\n", + "ax2.set_xlabel(r'Counts ({})'.format(ccd_image.unit.to_string('latex')), fontsize=14)\n", "ax2.set_title('Median pixel value', fontsize=16)\n", "\n", "# Add legend\n", @@ -281,7 +279,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The contrast with the previous section that used XDF image is striking. In this case msakng makes no difference because there were no non-data areas to mask. Signma clipping makes a much larger difference than in the XDF case because of the presence of a number of relatively bright stars in this image.\n", + "The contrast with the previous section, which used the XDF image, is striking. In this case masking makes no difference because there were no non-data areas to mask. Sigma clipping makes a much larger difference than in the XDF case because of the presence of a number of relatively bright stars in this image.\n", "\n", "Once sigma clipping has been done it makes little difference whether sources are also masked. \n", "\n", @@ -343,7 +341,7 @@ "source": [ "The result here may look surprising at first: background subtraction seems to have made everything much fainter. That is an artifact of using the same image normalization for both images even though the range of their data values is now very different.\n", "\n", - "In the image on the left the average data value is roughly `165 adu`. In the image on the right the average value is roughly `0 adu` by design, since we subtracted the mean of the first image from every pixel.\n", + "In the image on the left, the average data value is roughly `165 adu`. 
In the image on the right, the average value is roughly `0 adu` by design, since we subtracted the mean of the first image from every pixel.\n", "\n", "If we redo the plot but use a different image normalization for each it becomes clearer that the scalar background subtraction has one effect: moving the mean pixel value in the image." ] @@ -387,14 +385,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The `Background2D` class allows users to model 2-dimensional backgrounds, by calculating the mean or median in small boxes, and smoothing these boxes to reconstruct a continuous 2D background. The class includes the following arguments/attributes:\n", - "* **`box_size`** — the size of the boxes used to calculate the background. This should be larger than individual sources, yet still small enough to encompass changes in the background.\n", - "* **`filter_size`** — the size of the median filter used to smooth the final 2D background. The dimension should be odd along both axes.\n", - "* **`filter_threshold`** — threshold below which the smoothing median filter will not be applied.\n", - "* **`sigma_clip`** — an ` astropy.stats.SigmaClip` object that is used to specify the sigma and number of iterations used to sigma-clip the data before background calculations are performed.\n", - "* **`bkg_estimator`** — the method used to perform the background calculation in each box (mean, median, SExtractor algorithm, etc.).\n", + "The [`Background2D`](https://photutils.readthedocs.io/en/stable/api/photutils.background.Background2D.html#photutils.background.Background2D) class in [`photutils`](https://photutils.readthedocs.io/en/stable/index.html) allows users to model 2-dimensional backgrounds, by calculating the mean or median in small boxes, and smoothing these boxes to reconstruct a continuous 2D background. As in the previous notebook, we will use the [`MeanBackground`](https://photutils.readthedocs.io/en/stable/api/photutils.background.MeanBackground.html#photutils.background.MeanBackground) estimator. \n", "\n", - "For this example, we will use the `MeanBackground` estimator." + "Here we expect that the background image will be useful since there is clearly a gradient in the image." ] }, { @@ -441,6 +434,13 @@ "ax1.set_title('2D Estimated Background');" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As expected, the gradient is clearly visible in the background. Next, we subtract that two-dimensional background and look at the result." + ] + }, { "cell_type": "code", "execution_count": null, @@ -485,7 +485,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note how much more even the 2D background-subtracted image looks; especially the difference between these two images in the bottom left and top right corners. This makes sense, as the background that `Background2D` identified was a gradient from the bottom left corner to the top right corner." + "Note how much more even the 2D background-subtracted image looks; the difference between these two images is especially apparent in the bottom left and top right corners. This makes sense, as the background that [`Background2D`](https://photutils.readthedocs.io/en/stable/api/photutils.background.Background2D.html#photutils.background.Background2D) identified was a gradient from the bottom left corner to the top right corner." 
] } ], "metadata": { @@ -505,7 +505,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.7" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.02.01-IRAF-like-simulated-image.ipynb b/notebooks/photometry/01.02.01-IRAF-like-simulated-image.ipynb index 6e1da14..3115e44 100644 --- a/notebooks/photometry/01.02.01-IRAF-like-simulated-image.ipynb +++ b/notebooks/photometry/01.02.01-IRAF-like-simulated-image.ipynb @@ -77,7 +77,7 @@ "data = make_100gaussians_image()\n", "norm = ImageNormalize(stretch=SqrtStretch())\n", "plt.figure(figsize=(10, 5))\n", - "plt.imshow(data, cmap='Greys', origin='lower', norm=norm,\n", + "plt.imshow(data, cmap='Greys_r', origin='lower', norm=norm,\n", "           interpolation='nearest')\n", "plt.grid()" ] @@ -159,7 +159,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "A plot showing the location of all of the detected sources helps illustrate why some of sources were missed, To display where sources were detected, we create a a set of photutils circular apertures and use its plot method to add a circle over each detected source." + "A plot showing the location of all of the detected sources helps illustrate why some of the sources were missed. To display where sources were detected, we create a set of [`photutils`](https://photutils.readthedocs.io/en/stable/) circular apertures and use its plot method to add a circle over each detected source." ] }, { @@ -238,7 +238,14 @@ "\n", "+ the shape of the marker matches the minor-to-major axis ratio\n", "+ the major axis of the marker is the FWHM of the major axis\n", - "+ the thickness of the merker edge indicates the amplitude of the source.\n" + "+ the thickness of the marker edge indicates the amplitude of the source.\n" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we overlay the sources that were detected." ] + }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ @@ -318,22 +325,15 @@ "In some cases [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) finds *two* sources where there is really one. Sources 5 and 6 are a good example of this." ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### " - ] - }, { "cell_type": "markdown", "metadata": {}, "source": [ "### An exploration of some parameters of [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) \n", "\n", - "We can test some of the assertions in the previous paragraph. As a couple of example of that, we can try to\n", + "We can test some of the assertions in the previous paragraph. As a couple of examples, we can try\n", "\n", - "1. A lowe4r threshold in detection to try to detect the fainter sources in this test image. Below we will try a threshold of 3 times the standard deviation of the background instead of 5 times.\n", + "1. A lower detection threshold to try to detect the fainter sources in this test image. Below we will try a threshold of 3 times the standard deviation of the background instead of 5 times.\n", "2. A ratio smaller than 1 to detect the more elongated sources. The ratio argument is the ratio of the semiminor to semimajor axis. Below we will try a ratio of 0.33.\n", "\n", "The code below begins with a function to do the plotting for this so that we can reuse that code for each plot."
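For reference, the two variations amount to changing the finder configuration; a sketch is below, in which the `fwhm` value and the variable names (`std`, `data_subtracted`) are assumptions rather than values taken from the notebook.

```python
# Sketch of the two DAOStarFinder variations described above.
# fwhm=5.0, `std` (clipped background standard deviation), and
# `data_subtracted` are assumed names/values, not the notebook's.
finder_low_threshold = DAOStarFinder(fwhm=5.0, threshold=3.0 * std)
finder_elongated = DAOStarFinder(fwhm=5.0, threshold=5.0 * std, ratio=0.33)
# theta (default 0) sets the kernel orientation, so elongated sources at
# other position angles can still be missed.

sources_low = finder_low_threshold(data_subtracted)
sources_elongated = finder_elongated(data_subtracted)
```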
@@ -385,7 +385,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "The plot on the left shows that lowering the detetion threshold does find more sources, but it also includes erroneous detections. The plot on right demonstrates that we can detect some (but not all) of the highly elongated objects by changing the `ratio` parameter in [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder)." + "The plot on the left shows that lowering the detection threshold does find more sources, but it also includes erroneous detections. The plot on the right demonstrates that we can detect some (but not all) of the highly elongated objects by changing the `ratio` parameter in [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder). The reason that we don't pick up more of the elongated sources is that the orientation of the source matters too. The default value for the orientation is zero, i.e. aligned with the horizontal axis." ] }, { @@ -394,7 +394,7 @@ "source": [ "## Summary\n", "\n", - "The takeway from this section should be that the IRAF-like methods implemented in photutils are good at finding star-like sources. As we will see in the next couple of sections, the methods work very well for detecting stars or galaxiues that are small enough to look like stars. After that we will look at a couple of methods, local peak detection and image segmentation, that are better for detecting extended sources." + "The takeaway from this section should be that the IRAF-like methods implemented in photutils are good at finding star-like sources. As we will see in the next couple of sections, the methods work very well for detecting stars or galaxies that are small enough to look like stars. After that we will look at a couple of methods, local peak detection and image segmentation, that are better for detecting extended sources." ] } ], @@ -414,7 +414,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.02.02-IRAF-like-source-detection-XDF.ipynb b/notebooks/photometry/01.02.02-IRAF-like-source-detection-XDF.ipynb index ebfdd57..8b2e816 100644 --- a/notebooks/photometry/01.02.02-IRAF-like-source-detection-XDF.ipynb +++ b/notebooks/photometry/01.02.02-IRAF-like-source-detection-XDF.ipynb @@ -15,7 +15,7 @@ "source": [ "## Data for this notebook \n", "\n", - "We will be manipulating Hubble eXtreme Deep Field (XDF) data, which was collected using the Advanced Camera for Surveys (ACS) on Hubble between 2002 and 2012. The image we use here is the result of 1.8 million seconds (500 hours!) of exposure time, and includes some of the faintest and most distant galaxies that had ever been observed. \n", + "We will again be manipulating Hubble eXtreme Deep Field (XDF) data, which was collected using the Advanced Camera for Surveys (ACS) on Hubble between 2002 and 2012. The image we use here is the result of 1.8 million seconds (500 hours!) of exposure time, and includes some of the faintest and most distant galaxies that had ever been observed. 
\n", "\n", "*The methods demonstrated here are also available in narrative from within the [`photutils.detection` documentation](http://photutils.readthedocs.io/en/stable/detection.html).*\n", "\n", @@ -177,7 +177,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "With the `DAOStarFinder` [class](http://photutils.readthedocs.io/en/stable/api/photutils.DAOStarFinder.html), `photutils` provides users with an easy application of the popular [DAOFIND](http://stsdas.stsci.edu/cgi-bin/gethelp.cgi?daofind) algorithm ([Stetson 1987, PASP 99, 191](http://adsabs.harvard.edu/abs/1987PASP...99..191S)), originally developed at the Dominion Astrophysical Observatory. \n", + "With the [`DAOStarFinder` class](http://photutils.readthedocs.io/en/stable/api/photutils.DAOStarFinder.html), [`photutils`](https://photutils.readthedocs.io/en/stable/) provides users with an easy application of the popular [DAOFIND](http://stsdas.stsci.edu/cgi-bin/gethelp.cgi?daofind) algorithm ([Stetson 1987, PASP 99, 191](http://adsabs.harvard.edu/abs/1987PASP...99..191S)), originally developed at the Dominion Astrophysical Observatory. \n", "\n", "This algorithm detects sources by:\n", "* Searching for local maxima\n", @@ -218,6 +218,13 @@ "print(sources_dao)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are almost 1,500 sources detected in the image." + ] + }, { "cell_type": "code", "execution_count": null, @@ -251,7 +258,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Looking at a cutout around the relatively large galaxy in the top center of the image is instructive. As you can see below, the large galaxy is identified by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) as several overlapping sources. Some of the smaller galaxies in the image are identified as sources, but manye are not. The parameters could be tuned to detect more of the sources, but it turns out there are source detection methods better suited to extended sources that will be discussed in [XX BROKEN LINK XX]()." + "Looking at a cutout around the relatively large galaxy in the top center of the image is instructive. As you can see below, the large galaxy is identified by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) as several overlapping sources. Some of the smaller galaxies in the image are identified as sources, but many are not. The parameters could be tuned to detect more of the sources, but it turns out there are source detection methods better suited to extended sources that will be discussed in [XX BROKEN LINK XX]()." ] }, { @@ -303,13 +310,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Similarly to `DAOStarFinder`, `IRAFStarFinder` is a class that implements a pre-existing algorithm that is widely used within the astronomical community. This class uses the `starfind` [algorithm](http://stsdas.stsci.edu/cgi-bin/gethelp.cgi?starfind) that was originally part of IRAF.\n", + "Similarly to [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder), [`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) is a class that implements a pre-existing algorithm that is widely used within the astronomical community. 
This class uses the `starfind` [algorithm](http://stsdas.stsci.edu/cgi-bin/gethelp.cgi?starfind) that was originally part of IRAF.\n", "\n", - "`IRAFStarFinder` is fundamentally similar to `DAOStarFinder` in that it detects sources by finding local maxima above a certain threshold that match a Gaussian kernel. However, `IRAFStarFinder` differs in the following ways:\n", + "[`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) is fundamentally similar to [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) in that it detects sources by finding local maxima above a certain threshold that match a Gaussian kernel. However, [`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) differs in the following ways:\n", "* Does not allow users to specify an elliptical Gaussian kernel\n", "* Uses image moments to calculate the centroids, roundness, and sharpness of objects\n", "\n", - "Let's run the `IRAFStarFinder` algorithm on our data, with the same FWHM and threshold, and see how its results differ from `DAOStarFinder`:" + "Let's run the [`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) algorithm on our data, with the same FWHM and threshold, and see how its results differ from [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder):" ] }, { @@ -323,6 +330,13 @@ "print(sources_iraf)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are many fewer sources detected this time -- only 211 compared to the almost 1500 detected by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder)." + ] + }, { "cell_type": "code", "execution_count": null, diff --git a/notebooks/photometry/01.02.03-Source-detection-FEDER.ipynb b/notebooks/photometry/01.02.03-Source-detection-FEDER.ipynb index 88ce56e..6028a68 100644 --- a/notebooks/photometry/01.02.03-Source-detection-FEDER.ipynb +++ b/notebooks/photometry/01.02.03-Source-detection-FEDER.ipynb @@ -15,7 +15,7 @@ "\n", "The image in this notebook is of the field of the star [TIC 125489084](https://exofop.ipac.caltech.edu/tess/target.php?id=125489084) on a night when the moon was nearly full. The moonlight caused a smooth gradient across the background of the image.\n", "\n", - "As discussed in NOTEBOOK ABOUT BACKGROUND SUBTRACTION, the best way to remove the background in this case is to use photutils to construct a 2D model of the background. \n", + "As discussed in the section about [background subtraction with this image](01.01.03-Background-estimation-FEDER.ipynb), the best way to remove the background in this case is to use photutils to construct a 2D model of the background. \n", "\n", "As we will see in a moment, [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) does a good job identifying most of the sources in the image, since all of the sources are stars." 
] @@ -96,23 +96,13 @@ "header = ccd_image.header\n" ] }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Set up the normalization and colormap\n", - "norm_image = ImageNormalize(ccd_image, interval=AsymmetricPercentileInterval(30, 99.5))" - ] - }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Perform 2-D background estimation and subtraction\n", "\n", - "For more discussion of the choices here, see XXXXX. The next couple of cells calculate the background and subtract it, and then display the background-subtracted data." + "For more discussion of the choices here, see [background subtraction with this image](01.01.03-Background-estimation-FEDER.ipynb). The next couple of cells calculate the background and subtract it, and then display the background-subtracted data." ] }, { @@ -162,7 +152,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Note how much more even the 2D background-subtracted image looks; especially the difference between these two images in the bottom left and top right corners. This makes sense, as the background that `Background2D` identified was a gradient from the bottom left corner to the top right corner." + "Note that after subtraction the background of the image looks fairly uniform." ] }, { @@ -171,7 +161,7 @@ "source": [ "## Source detection\n", "\n", - "We will use the [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) to detect the sources in this image. The typical FWHM of the stars in the image is about 6.2 pixels. For the detection threshold, we will use 5 times the standard deviation of the sigma-clipped image.\n", + "We will use the [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) class to detect the sources in this image. The typical full width at half maximum (FWHM) of the stars in the image is about 6.2 pixels. For the detection threshold, we will use 5 times the standard deviation of the sigma-clipped image.\n", "\n", "In addition, we'll mask out the area within 50 pixels of the edge of the image, where the stars are often elongated on this camera." ] @@ -250,7 +240,7 @@ "source": [ "You might at first be tempted to lower the threshold for detection. It turns out that that will report many more detections, but many of them will not actually be stars. A more effective approach is to increase the FWHM used by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) instead, to something like 1.5 or 2 × the FWHM of the stars in the image.\n", "\n", - "A comparison of these three approaches is below." + "Below is a comparison of these two approaches with the source detection we did above."
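For reference, the 50-pixel edge mask mentioned above can be built with plain `numpy`; a sketch follows, with the array name assumed.

```python
# Sketch: boolean mask that is True within 50 pixels of any edge.
# `data_subtracted` is an assumed name for the background-subtracted array.
edge_mask = np.zeros(data_subtracted.shape, dtype=bool)
edge_mask[:50, :] = edge_mask[-50:, :] = True   # bottom and top edges
edge_mask[:, :50] = edge_mask[:, -50:] = True   # left and right edges

# Masked pixels are then ignored by the finder:
# sources = daofind(data_subtracted, mask=edge_mask)
```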
] }, { @@ -322,7 +312,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.9" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.03-Local-peak-detection.ipynb b/notebooks/photometry/01.03-Local-peak-detection.ipynb index dbd5a34..8c8803b 100644 --- a/notebooks/photometry/01.03-Local-peak-detection.ipynb +++ b/notebooks/photometry/01.03-Local-peak-detection.ipynb @@ -4,15 +4,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Local peak detection" + "# Local peak detection\n", + "\n", + "Peak detection finds sources by identifying local maxima above a given threshold and separated by a given distance, rather than trying to fit data to a given model like a Gaussian. It can be particularly useful for identifying non-stellar sources or heavily distorted sources in image data." ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { @@ -31,7 +26,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.8" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.03.01-peak_detection-simulated-image.ipynb b/notebooks/photometry/01.03.01-peak_detection-simulated-image.ipynb new file mode 100644 index 0000000..73ee696 --- /dev/null +++ b/notebooks/photometry/01.03.01-peak_detection-simulated-image.ipynb @@ -0,0 +1,220 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Peak finding" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Simulated image of galaxies and stars" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the first part of this notebook we consider an image that includes 100 sources, all with Gaussian profiles, some of which are fairly elongated. We used this image in the previous section about removing the background prior to source detection.\n", + "\n", + "As usual, we begin with some imports." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import warnings\n", + "\n", + "from astropy.stats import sigma_clipped_stats, gaussian_sigma_to_fwhm\n", + "from astropy.table import QTable\n", + "from astropy.visualization import simple_norm, SqrtStretch\n", + "from astropy.visualization.mpl_normalize import ImageNormalize\n", + "\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "from photutils.aperture import CircularAperture, EllipticalAperture\n", + "from photutils.centroids import centroid_2dg\n", + "from photutils.datasets import make_100gaussians_image, make_gaussian_sources_image\n", + "from photutils.detection import find_peaks, DAOStarFinder\n", + "\n", + "plt.style.use('../photutils_notebook_style.mplstyle')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To begin, let's create the image and subtract the background. Recall that this image contains a mix of star-like objects and more extended sources." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data = make_100gaussians_image()\n", + "mean, med, std = sigma_clipped_stats(data, sigma=3.0, maxiters=5)\n", + "data_subtracted = data - med" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Source Detection with peak finding" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For more general source detection cases that do not require comparison with models, `photutils` offers the [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) function. \n", + "\n", + "This function simply finds sources by identifying local maxima above a given threshold and separated by a given distance, rather than trying to fit data to a given model. Unlike the previous detection algorithms, [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) does not necessarily calculate objects' centroids. Unless the `centroid_func` argument is passed a function like `photutils.centroids.centroid_2dg` that can handle source position centroiding, `find_peaks` will return just the integer value of the peak pixel for each source.\n", + "\n", + "This algorithm is particularly useful for identifying non-stellar sources or heavily distorted sources in image data.\n", + "\n", + "The centroid-finding, which we do here only so that we can compare positions found by this method to positions found by the star-finding methods, can generate many warnings when an attempt is made to fit a 2D Gaussian to the sources. The warnings are suppressed below to avoid cluttering the output.\n", + "\n", + "Let's see how it does:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with warnings.catch_warnings(action=\"ignore\"):\n", + " sources_findpeaks = find_peaks(data_subtracted,\n", + " threshold=5. 
* std, box_size=29,\n", + "                                   centroid_func=centroid_2dg)\n", + "print(sources_findpeaks)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Set up the figure with subplots\n", + "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n", + "\n", + "norm = ImageNormalize(stretch=SqrtStretch())\n", + "fitsplot = ax1.imshow(data, cmap='Greys', origin='lower', norm=norm,\n", + "                      interpolation='nearest')\n", + "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=30, marker='o',\n", + "            lw=1, alpha=0.7, facecolor='None', edgecolor='r');\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Comparing Detection Methods\n", + "\n", + "Let's compare how peak finding compares to using [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) on this image.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "detect_fwhm = 5.0\n", + "daofind = DAOStarFinder(fwhm=detect_fwhm, threshold=5.*std) \n", + "sources_dao = daofind(data_subtracted)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(f'''DAOStarFinder: {len(sources_dao)} sources\n", + "find_peaks: {len(sources_findpeaks)} sources''') " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's plot both sets of sources on the original image." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n", + "\n", + "# Plot the data\n", + "fitsplot = ax1.imshow(data, norm=norm, cmap='Greys', origin='lower')\n", + "\n", + "marker_size = 60\n", + "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=marker_size, marker='s',\n", + "            lw=1, alpha=1, facecolor='None', edgecolor='r', label='Found by find_peaks')\n", + "ax1.scatter(sources_dao['xcentroid'], sources_dao['ycentroid'], s=2 * marker_size, marker='D',\n", + "            lw=1, alpha=1, facecolor='None', edgecolor='#0077BB', label='Found by DAOfind')\n", + "\n", + "# Add legend\n", + "ax1.legend(ncol=2, loc='lower center', bbox_to_anchor=(0.5, -0.35))\n", + "\n", + "\n", + "ax1.set_xlabel('X (pixels)')\n", + "ax1.set_ylabel('Y (pixels)')\n", + "ax1.set_title('Sources Found by Different Methods');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "That peak finding works better on this image than DAOfind shouldn't be surprising because many of the sources are not round. As we discussed in the section on [source detection in a simulated image](01.02.01-IRAF-like-simulated-image.ipynb), the star-finding techniques are not a good match for an image like this with many extended sources." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Summary\n", + "\n", + "There are several more sources found by [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) than are found by the star-finding algorithms, but several are still being missed. In the section on [image segmentation with a simulated image](01.04.01-image-segmentation-simulated-image.ipynb) we will look at a much different approach that is especially well suited to finding extended objects." 
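One way to make the comparison above quantitative is to cross-match the two centroid lists; a sketch follows, assuming `scipy` is available and using an arbitrary 5-pixel matching tolerance.

```python
# Sketch: cross-match find_peaks centroids against DAOStarFinder centroids.
# The 5-pixel tolerance is an arbitrary illustrative choice.
from scipy.spatial import cKDTree

peaks_xy = np.transpose([sources_findpeaks['x_centroid'],
                         sources_findpeaks['y_centroid']])
dao_xy = np.transpose([sources_dao['xcentroid'],
                       sources_dao['ycentroid']])

distances, _ = cKDTree(dao_xy).query(peaks_xy)
print(f"{np.sum(distances < 5)} find_peaks sources have a DAOStarFinder match")
```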
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.8" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/photometry/01.03.02-peak_detection-XDF.ipynb b/notebooks/photometry/01.03.02-peak_detection-XDF.ipynb index 7b2c9a8..768c409 100644 --- a/notebooks/photometry/01.03.02-peak_detection-XDF.ipynb +++ b/notebooks/photometry/01.03.02-peak_detection-XDF.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Peak finding" + "# Peak finding in the XDF" ] }, { @@ -20,6 +20,13 @@ "*The original authors of this notebook were Lauren Chambers, Erik Tollerud and Tom Wilson.*\n" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We begin with imports." + ] + }, { "cell_type": "code", "execution_count": null, @@ -46,6 +53,13 @@ "%matplotlib inline" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we load the image and calculate the background statistics." + ] + }, { "cell_type": "code", "execution_count": null, @@ -80,20 +94,18 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Source Detection with `find_peaks`" + "## Source Detection with peak finding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "For more general source detection cases that do not require comparison with models, `photutils` offers the `find_peaks` function. \n", + "For more general source detection cases that do not require comparison with models, `photutils` offers the [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) function. \n", "\n", - "This function simply finds sources by identifying local maxima above a given threshold and separated by a given distance, rather than trying to fit data to a given model. Unlike the previous detection algorithms, `find_peaks` does not necessarily calculate objects' centroids. Unless the `centroid_func` argument is passed a function like `photutils.centroids.centroid_2dg` that can handle source position centroiding, `find_peaks` will return just the integer value of the peak pixel for each source.\n", + "This function simply finds sources by identifying local maxima above a given threshold and separated by a given distance, rather than trying to fit data to a given model. Unlike the previous detection algorithms, [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) does not necessarily calculate objects' centroids. Unless the `centroid_func` argument is passed a function like `photutils.centroids.centroid_2dg` that can handle source position centroiding, `find_peaks` will return just the integer value of the peak pixel for each source.\n", "\n", - "This algorithm is particularly useful for identifying non-stellar sources or heavily distorted sources in image data.\n", - "\n", - "The centroid-finding, which we do here only so that we can compare positions to the other found by this method to positions found by the star-finding methods, can generate many warnings when an attempt is made to fit a 2D Gaussian to the sources. 
The warnings are suppressed below to avoid cluttering the output.\n", + "The centroid-finding, which we do here only so that we can compare positions found by this method to positions found by the star-finding methods, can generate many warnings when an attempt is made to fit a 2D Gaussian to the sources. The warnings are suppressed below to avoid cluttering the output.\n", "\n", "Let's see how it does:" ] @@ -111,6 +123,13 @@ "print(sources_findpeaks)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are roughly 1,000 sources identified in the image. Recall from the notebook on [star-finding detection methods](01.02.02-IRAF-like-source-detection-XDF.ipynb) that roughly 1,500 sources were found with [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder)." + ] + }, { "cell_type": "code", "execution_count": null, @@ -156,7 +175,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "We once again look at region around the large galaxy. Several sources are detected, more than were found in either of the methods tuned for star finding that were discussed in [XX BROKEN LINK IRAF-LIKE XDF](). A comparison of the three aproaches is later in this section." + "We once again look at the region around the large galaxy. A comparison of this approach with the star-finding approaches appears later in this section." ] }, { @@ -313,6 +332,13 @@ "ax1.set_title('Sources Found by Different Methods');" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is pretty cluttered, so we once again look at the region around the large galaxy in the image." + ] + }, { "cell_type": "code", "execution_count": null, @@ -348,13 +374,6 @@ "ax1.set_title('Sources Found by Different Methods');" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "*Remember that you can double-click on the plot to zoom in and look around!*" - ] - }, { "cell_type": "markdown", "metadata": {}, "source": [ @@ -368,9 +387,9 @@ "source": [ "## Custom Detection Algorithms\n", "\n", - "If none of the algorithms we've reviewed above do exactly what you need, `photutils` also provides infrastructure for you to generate and use your own source detection algorithm: the `StarFinderBase` object can be inherited and used to develop new star-finding classes. Take a look at the [documentation](https://photutils.readthedocs.io/en/latest/api/photutils.detection.StarFinderBase.html#photutils.detection.StarFinderBase) for more information.\n", + "If none of the algorithms we've reviewed above do exactly what you need, [`photutils`](https://photutils.readthedocs.io/en/stable/) also provides infrastructure for you to generate and use your own source detection algorithm: the `StarFinderBase` object can be inherited and used to develop new star-finding classes. Take a look at the [documentation](https://photutils.readthedocs.io/en/latest/api/photutils.detection.StarFinderBase.html#photutils.detection.StarFinderBase) for more information.\n", "\n", - "If you do go that route, remember that `photutils` is open source; you would be very welcome to [open a pull request](https://github.com/astropy/photutils/blob/master/CONTRIBUTING.rst) and incorporate your new star finder into the `photutils` source code – for everyone to use!" 
+ "If you do go that route, remember that [`photutils`](https://photutils.readthedocs.io/en/stable/) is open source; you would be very welcome to [open a pull request](https://github.com/astropy/photutils/blob/master/CONTRIBUTING.rst) and incorporate your new star finder into the [`photutils`](https://photutils.readthedocs.io/en/stable/) source code – for everyone to use!" ] }, { diff --git a/notebooks/photometry/01.03.03-peak_detection-Feder.ipynb b/notebooks/photometry/01.03.03-peak_detection-Feder.ipynb new file mode 100644 index 0000000..6d92935 --- /dev/null +++ b/notebooks/photometry/01.03.03-peak_detection-Feder.ipynb @@ -0,0 +1,295 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Peak finding in a stellar image" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data used in this notebook\n", + "\n", + "The image in this notebook is of the field of the star [TIC 125489084](https://exofop.ipac.caltech.edu/tess/target.php?id=125489084) on a night when the moon was nearly full. The moonlight caused a smooth gradient across the background of the image.\n", + "\n", + "As discussed in the section about [background subtraction with this image](01.01.03-Background-estimation-FEDER.ipynb), the best way to remove the background in this case is to use photutils to construct a 2D model of the background. \n", + "\n", + "\n", + "As we will see in a moment, [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) does a good job identifying most of the sources in the image, since all of the sources are stars." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import warnings\n", + "\n", + "from astropy.nddata import CCDData\n", + "from astropy.stats import sigma_clipped_stats, gaussian_sigma_to_fwhm, SigmaClip\n", + "from astropy.table import QTable\n", + "from astropy.visualization import simple_norm, SqrtStretch\n", + "from astropy.visualization.mpl_normalize import AsymmetricPercentileInterval, ImageNormalize\n", + "\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "from photutils.aperture import CircularAperture, EllipticalAperture\n", + "from photutils.background import Background2D, MeanBackground\n", + "from photutils.centroids import centroid_2dg\n", + "from photutils.datasets import make_100gaussians_image, make_gaussian_sources_image\n", + "from photutils.detection import find_peaks, DAOStarFinder\n", + "\n", + "plt.style.use('../photutils_notebook_style.mplstyle')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To begin, let's create the image and subtract the background. Recall that this image contains only stars." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ccd_image = CCDData.read('TIC_125489084.01-S001-R055-C001-ip.fit.bz2')\n",
+ "\n",
+ "sigma_clip = SigmaClip(sigma=3., maxiters=5)\n",
+ "bkg_estimator = MeanBackground()\n",
+ "bkg = Background2D(ccd_image, box_size=200, filter_size=(9, 9), mask=ccd_image.mask,\n",
+ "                   sigma_clip=sigma_clip, bkg_estimator=bkg_estimator)\n",
+ "# Calculate the 2D background subtraction, maintaining metadata, unit, and mask\n",
+ "ccd_2d_bkgdsub = ccd_image.subtract(bkg.background)\n",
+ "\n",
+ "data_subtracted = ccd_2d_bkgdsub.data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "mean, med, std = sigma_clipped_stats(data_subtracted, sigma=3.0, maxiters=5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Source Detection with peak finding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As in the previous two peak finding examples, we include a centroid function so that we can compare the sources detected by peak finding with the sources identified by the star-finding algorithms.\n",
+ "\n",
+ "The centroid-finding can generate many warnings when an attempt is made to fit a 2D Gaussian to the sources. The warnings are suppressed below to avoid cluttering the output."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with warnings.catch_warnings(action=\"ignore\"):\n",
+ "    sources_findpeaks = find_peaks(data_subtracted,\n",
+ "                                   threshold=5. * std, box_size=29,\n",
+ "                                   centroid_func=centroid_2dg)\n",
+ "print(sources_findpeaks)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "[`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) identifies roughly 550 sources, many more than were detected by the [star-finding approach](01.02.03-Source-detection-FEDER.ipynb).\n",
+ "\n",
+ "Let's take a look at the detections on the image to see what is going on."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n",
+ "\n",
+ "# Plot the 2D-subtracted data\n",
+ "norm = ImageNormalize(ccd_2d_bkgdsub, interval=AsymmetricPercentileInterval(30, 99.5))\n",
+ "\n",
+ "fitsplot = ax1.imshow(data_subtracted, cmap='viridis', origin='lower', norm=norm,\n",
+ "                      interpolation='nearest')\n",
+ "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=30, marker='o',\n",
+ "            lw=1, alpha=0.7, facecolor='None', edgecolor='r')\n",
+ "ax1.set_title(\"find_peaks, no masking\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Mask bad regions of the CCD"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The column on the left side of this CCD is not actually exposed to light, so [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) finds a bunch of spurious sources. Recall that we can provide a mask to the source finder. As we did in the [star-finding approach](01.02.03-Source-detection-FEDER.ipynb), we exclude the region within 50 pixels of the edge."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create a mask array\n",
+ "mask = np.zeros_like(ccd_2d_bkgdsub, dtype=bool)\n",
+ "\n",
+ "# Set the mask for a border of 50 pixels around the image\n",
+ "mask[:50, :] = mask[-50:, :] = mask[:, :50] = mask[:, -50:] = True\n",
+ "\n",
+ "with warnings.catch_warnings(action=\"ignore\"):\n",
+ "    sources_findpeaks = find_peaks(data_subtracted, mask=mask,\n",
+ "                                   threshold=5. * std, box_size=29,\n",
+ "                                   centroid_func=centroid_2dg)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n",
+ "\n",
+ "# Plot the 2D-subtracted data\n",
+ "norm = ImageNormalize(ccd_2d_bkgdsub, interval=AsymmetricPercentileInterval(30, 99.5))\n",
+ "\n",
+ "fitsplot = ax1.imshow(data_subtracted, cmap='viridis', origin='lower', norm=norm,\n",
+ "                      interpolation='nearest')\n",
+ "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=30, marker='o',\n",
+ "            lw=1, alpha=0.7, facecolor='None', edgecolor='r')\n",
+ "\n",
+ "ax1.set_title(\"find_peaks, image edges masked\");"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This looks quite a bit more reasonable."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Comparing Detection Methods\n",
+ "\n",
+ "Let's see how peak finding compares to [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) on this image.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create the star finder\n",
+ "fwhm = 6.2\n",
+ "daofinder = DAOStarFinder(threshold=5 * std, fwhm=2 * fwhm)\n",
+ "sources_dao = daofinder(ccd_2d_bkgdsub.data, mask=mask)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(f'''DAOStarFinder: {len(sources_dao)} sources\n",
+ "find_peaks: {len(sources_findpeaks)} sources''')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example, many more sources are detected by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) than by [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks). A plot showing both detection methods illustrates that the star-finding approach is doing a better job here."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n",
+ "\n",
+ "# Plot the data\n",
+ "fitsplot = ax1.imshow(data_subtracted, norm=norm, cmap='Greys', origin='lower')\n",
+ "\n",
+ "marker_size = 60\n",
+ "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=marker_size, marker='s',\n",
+ "            lw=1, alpha=1, facecolor='None', edgecolor='r', label='Found by find_peaks')\n",
+ "ax1.scatter(sources_dao['xcentroid'], sources_dao['ycentroid'], s=2 * marker_size, marker='D',\n",
+ "            lw=1, alpha=1, facecolor='None', edgecolor='#0077BB', label='Found by DAOfind')\n",
+ "\n",
+ "# Add legend\n",
+ "ax1.legend(ncol=2, loc='lower center', bbox_to_anchor=(0.5, -0.20))\n",
+ "\n",
+ "\n",
+ "ax1.set_xlabel('X (pixels)')\n",
+ "ax1.set_ylabel('Y (pixels)')\n",
+ "ax1.set_title('Sources Found by Different Methods');"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Summary\n",
+ "\n",
+ "In this case, [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) performs better than [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks). This makes sense, since [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) is tuned for finding stars, and all of the sources in this image are stars."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.8"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/photometry/01.04-image-segmentation.ipynb b/notebooks/photometry/01.04-image-segmentation.ipynb
new file mode 100644
index 0000000..1eda5f0
--- /dev/null
+++ b/notebooks/photometry/01.04-image-segmentation.ipynb
@@ -0,0 +1,34 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Image segmentation\n",
+ "\n",
+ "[Image segmentation](https://en.wikipedia.org/wiki/Image_segmentation) is an image processing technique that identifies regions of adjacent pixels in an image that are above a threshold. It is the method used by the [Source Extractor](https://sextractor.readthedocs.io/en/latest/index.html) package to detect sources. An implementation is provided as part of [`photutils`](https://photutils.readthedocs.io/en/stable/)."
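+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a preview, the cell below is a minimal sketch of the idea using the simulated image bundled with [`photutils`](https://photutils.readthedocs.io/en/stable/). The threshold of 3 times the clipped standard deviation and the 10-pixel minimum source size are illustrative choices, not recommendations; the notebooks in this section work through these choices in detail."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from astropy.stats import sigma_clipped_stats\n",
+ "\n",
+ "from photutils.datasets import make_100gaussians_image\n",
+ "from photutils.segmentation import detect_sources\n",
+ "\n",
+ "# Estimate the background level and noise with sigma clipping\n",
+ "data = make_100gaussians_image()\n",
+ "mean, median, std = sigma_clipped_stats(data, sigma=3.0, maxiters=5)\n",
+ "\n",
+ "# Label groups of at least 10 adjacent pixels that are more than\n",
+ "# 3 standard deviations above the background\n",
+ "segm = detect_sources(data - median, threshold=3.0 * std, npixels=10)\n",
+ "print(f'Found {segm.max_label} sources')"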
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/photometry/01.04.01-image-segmentation-simulated-image.ipynb b/notebooks/photometry/01.04.01-image-segmentation-simulated-image.ipynb
new file mode 100644
index 0000000..6b9c64d
--- /dev/null
+++ b/notebooks/photometry/01.04.01-image-segmentation-simulated-image.ipynb
@@ -0,0 +1,433 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Image segmentation with a simulated image"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As usual, we begin with some imports."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import warnings\n",
+ "\n",
+ "from astropy.convolution import convolve\n",
+ "from astropy.stats import sigma_clipped_stats, gaussian_sigma_to_fwhm\n",
+ "from astropy.table import QTable\n",
+ "from astropy.visualization import simple_norm, SqrtStretch\n",
+ "from astropy.visualization.mpl_normalize import ImageNormalize\n",
+ "\n",
+ "import matplotlib.pyplot as plt\n",
+ "import numpy as np\n",
+ "\n",
+ "from photutils.aperture import CircularAperture, EllipticalAperture\n",
+ "from photutils.centroids import centroid_2dg\n",
+ "from photutils.datasets import make_100gaussians_image, make_gaussian_sources_image, make_noise_image\n",
+ "from photutils.detection import find_peaks, DAOStarFinder\n",
+ "from photutils.segmentation import detect_sources, make_2dgaussian_kernel, SourceCatalog\n",
+ "\n",
+ "plt.style.use('../photutils_notebook_style.mplstyle')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## More about image segmentation using a small example"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before diving into an application of image segmentation to a simulated image, some more description of the technique may be helpful. We construct a small image 10 pixels on a side. We will start with some noise and set the values of several pixels well above the standard deviation of the noise."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Make the noise\n",
+ "std_dev = 5\n",
+ "image = make_noise_image([10, 10], stddev=std_dev, mean=0)\n",
+ "\n",
+ "# Make a source smaller than the image\n",
+ "source = np.array(\n",
+ "    [\n",
+ "        [1, 2, 3, 1],\n",
+ "        [1, 0.5, 2, 3],\n",
+ "        [1, 3, 2, 1],\n",
+ "        [0, 2, 1, 0]\n",
+ "    ]\n",
+ ")\n",
+ "\n",
+ "# Make sure the source pixels are much larger than the background\n",
+ "source = 10.0 * std_dev * source\n",
+ "\n",
+ "# Put one source in the upper left...\n",
+ "image[1:5, 0:4] += source\n",
+ "# ...and one source in the lower right, and transpose it to make it look different\n",
+ "image[-5:-1, -4:] += source.T"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plt.figure(figsize=(4, 4))\n",
+ "plt.imshow(image, cmap=\"viridis\")\n",
+ "\n",
+ "plt.title(\"Simulated image\")\n",
+ "# Suppress tick labels\n",
+ "plt.xticks([])\n",
+ "plt.yticks([]);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we create a detection image, called `detected_image` below, in which all pixels with value larger than 3 times the standard deviation are set to one, and the rest set to zero. In this example, the detected pixels fall into one of two connected groups, by construction.\n",
+ "\n",
+ "This detection image is shown below, with the pixels in the two sources labeled by either \"1\" or \"2\" and the rest of the pixels labeled \"0\"."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "source_pixels = image > 3 * std_dev\n",
+ "detected_image = image.copy()\n",
+ "detected_image[~source_pixels] = 0\n",
+ "# This initially labels all sources with the same number, 1\n",
+ "detected_image[source_pixels] = 1\n",
+ "\n",
+ "# The sources were added in a way that makes it easy to separate the sources,\n",
+ "# so number the source in the lower right 2\n",
+ "detected_image[5:, 5:] *= 2\n",
+ "\n",
+ "\n",
+ "plt.figure(figsize=(4, 4))\n",
+ "plt.imshow(detected_image, cmap=\"viridis\")\n",
+ "\n",
+ "for x in range(10):\n",
+ "    for y in range(10):\n",
+ "        plt.annotate(f\"{detected_image[y, x]:.0f}\", (x, y), va=\"center\", ha=\"center\", color=\"cyan\")\n",
+ "plt.xticks([])\n",
+ "plt.yticks([])\n",
+ "plt.title(\"Segmentation image\");"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "That, in a nutshell, is image segmentation. The implementation is more complicated than what is done above, and the image is typically convolved with a filter before doing the segmentation, but the gist of the method is straightforward."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Image segmentation with a larger image\n",
+ "\n",
+ "We again consider the simulated image with 100 Gaussian sources that was introduced in [background removal with a simulated image](01.01.01-Background-removal-simulated-data.ipynb)."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To begin, let's create the image and subtract the background. Recall that this image contains a mix of star-like objects and more extended sources, and the sigma-clipped median does a reasonable job of estimating the background."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data = make_100gaussians_image()\n",
+ "mean, med, std = sigma_clipped_stats(data, sigma=3.0, maxiters=5)\n",
+ "data_subtracted = data - med"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Settings for image segmentation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "There are a few settings we need to make before detecting sources with image segmentation using the [`photutils`](https://photutils.readthedocs.io/en/stable/) function [`detect_sources`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.detect_sources.html#photutils.segmentation.detect_sources):\n",
+ "\n",
+ "+ Full-width half-maximum (FWHM) of the typical source in the image, represented by `fwhm` in the code below. This is used to smooth the image before performing the segmentation.\n",
+ "+ Threshold for a pixel to count as part of a source, typically expressed as a multiple of the standard deviation in the image and represented below by `threshold`.\n",
+ "+ Number of adjacent pixels above the threshold to count as a source, represented by `npixels` below.\n",
+ "+ Size of the kernel used for smoothing, represented by `kernel_size` below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define threshold and minimum object size\n",
+ "threshold = 3. * std\n",
+ "npixels = 10\n",
+ "fwhm = 3\n",
+ "kernel_size = 5"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "With those settings out of the way, there are three steps to the detection:\n",
+ "\n",
+ "1. Create the smoothing kernel.\n",
+ "2. Apply the smoothing by convolving the kernel with the original image.\n",
+ "3. Perform image segmentation on the convolved image."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Make the kernel\n",
+ "kernel = make_2dgaussian_kernel(fwhm, size=kernel_size)\n",
+ "\n",
+ "# Make a convolved image\n",
+ "convolved_data = convolve(data_subtracted, kernel)\n",
+ "\n",
+ "# Create a segmentation image from the convolved image\n",
+ "segm = detect_sources(convolved_data, threshold, npixels)\n",
+ "\n",
+ "print('Found {} sources'.format(segm.max_label))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finding 82 sources here is the closest we have yet come to detecting all 100 sources that are in this image."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The original image, convolved image, and segmentation image are shown below. Each color in the segmentation image represents a different source. The color map for this case is part of the [`SegmentationImage`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SegmentationImage.html#photutils.segmentation.SegmentationImage) generated by [`detect_sources`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.detect_sources.html#photutils.segmentation.detect_sources)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, [ax1, ax2, ax3] = plt.subplots(3, 1, figsize=(6, 12))\n",
+ "plt.tight_layout()\n",
+ "\n",
+ "# Plot the data\n",
+ "# Set up the normalization and colormap\n",
+ "norm_image = ImageNormalize(stretch=SqrtStretch())\n",
+ "\n",
+ "# Plot the original data\n",
+ "fitsplot = ax1.imshow(data_subtracted,\n",
+ "                      norm=norm_image, cmap='Greys_r')\n",
+ "ax1.set_ylabel('Y (pixels)')\n",
+ "ax1.set_title('Original Data')\n",
+ "\n",
+ "# Plot the convolved data\n",
+ "convolved_plot = ax2.imshow(convolved_data, cmap=\"Greys_r\", norm=norm_image)\n",
+ "ax2.set_ylabel(\"Y (pixel)\")\n",
+ "ax2.set_title(\"Convolved image\")\n",
+ "\n",
+ "# Plot the segmentation image\n",
+ "segplot = ax3.imshow(segm, vmin=1, cmap=segm.cmap)\n",
+ "ax3.set_ylabel(\"Y (pixel)\")\n",
+ "ax3.set_xlabel('X (pixels)')\n",
+ "ax3.set_title('Segmentation Map')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Much more information about the segmentation sources can be calculated using the [SourceCatalog](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SourceCatalog.html#photutils.segmentation.SourceCatalog) from [`photutils`](https://photutils.readthedocs.io/en/stable/). This object will be discussed in more detail in the next notebook. For now we use it to construct apertures for each source in the segmentation image; these will be used when we compare image segmentation to the other source detection methods we have discussed."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "catalog = SourceCatalog(data_subtracted, segm)\n",
+ "table = catalog.to_table()\n",
+ "\n",
+ "# Define the approximate isophotal extent\n",
+ "r = 4. # pixels\n",
+ "\n",
+ "# Create the apertures\n",
+ "apertures = []\n",
+ "for obj in catalog:\n",
+ "    position = (obj.xcentroid, obj.ycentroid)\n",
+ "    a = obj.semimajor_sigma.value * r\n",
+ "    b = obj.semiminor_sigma.value * r\n",
+ "    theta = obj.orientation\n",
+ "    apertures.append(EllipticalAperture(position, a, b, theta=theta))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Comparing Detection Methods\n",
+ "\n",
+ "Let's see how image segmentation compares to [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) and [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) on this image.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Detect sources with DAOFIND\n",
+ "daofind = DAOStarFinder(fwhm=fwhm, threshold=threshold)\n",
+ "sources_dao = daofind(data_subtracted)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Detect sources with find_peaks\n",
+ "with warnings.catch_warnings(action=\"ignore\"):\n",
+ "    sources_findpeaks = find_peaks(data_subtracted,\n",
+ "                                   threshold=5. 
* std, box_size=29,\n", + " centroid_func=centroid_2dg)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(f'''DAOStarFinder: {len(sources_dao)} sources\n", + "find_peaks: {len(sources_findpeaks)} sources\n", + "segmentation: {len(apertures)} sources''') " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A graph of the source locations is helpful." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n", + "\n", + "# Plot the data\n", + "fitsplot = ax1.imshow(data, norm=norm_image, cmap='Greys_r', origin='lower')\n", + "\n", + "marker_size = 60\n", + "ax1.scatter(sources_findpeaks['x_peak'], sources_findpeaks['y_peak'], s=marker_size, marker='s',\n", + " lw=1, alpha=1, facecolor='None', edgecolor='r', label='Found by find_peaks')\n", + "ax1.scatter(sources_dao['xcentroid'], sources_dao['ycentroid'], s=2 * marker_size, marker='D',\n", + " lw=1, alpha=1, facecolor='None', edgecolor='#0077BB', label='Found by DAOfind')\n", + "\n", + "# Plot the apertures\n", + "apertures[0].plot(color='cyan', lw=1, alpha=1, axes=ax1, label=\"Image segmentation\")\n", + "for aperture in apertures[1:]:\n", + " aperture.plot(color='cyan', lw=1, alpha=1, axes=ax1)\n", + " \n", + "# Add legend\n", + "ax1.legend(ncol=3, loc='lower center', bbox_to_anchor=(0.5, -0.35))\n", + "\n", + "# Set the limits to avoid whitespace around the image\n", + "ax1.set_xlim(0, 500)\n", + "ax1.set_ylim(0, 250)\n", + "\n", + "ax1.set_xlabel('X (pixels)')\n", + "ax1.set_ylabel('Y (pixels)')\n", + "ax1.set_title('Sources Found by Different Methods');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Summary\n", + "\n", + "Image segmentation identified more of the sources in this test image than either [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) or [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder). The latter wasn't expected to work well on this image because there is a large variety of source shapes and sizes. The undetected sources look relatively large and faint. Experimenting with the segmentation parameters would likely improve the detection." 
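+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a rough illustration of that last point, the cell below reruns the segmentation with a few smaller values of `npixels`; the values are arbitrary choices for demonstration, not recommendations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Smaller npixels admits smaller sources; these values are illustrative only\n",
+ "for npix in (10, 7, 5, 3):\n",
+ "    segm_test = detect_sources(convolved_data, threshold, npix)\n",
+ "    print(f'npixels = {npix}: {segm_test.max_label} sources')"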
+ ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/photometry/01.04.02-image-segmentation-XDF.ipynb b/notebooks/photometry/01.04.02-image-segmentation-XDF.ipynb index edd8081..3710678 100644 --- a/notebooks/photometry/01.04.02-image-segmentation-XDF.ipynb +++ b/notebooks/photometry/01.04.02-image-segmentation-XDF.ipynb @@ -1,29 +1,105 @@ { "cells": [ + { + "cell_type": "markdown", + "id": "06c28589-66fe-422c-bcef-28df5a5b6583", + "metadata": {}, + "source": [ + "# Application to the XDF" + ] + }, + { + "cell_type": "markdown", + "id": "4a539fa2-f180-4b88-a620-da47c6af8270", + "metadata": {}, + "source": [ + "## Data for this notebook \n", + "\n", + "We will be manipulating Hubble eXtreme Deep Field (XDF) data, which was collected using the Advanced Camera for Surveys (ACS) on Hubble between 2002 and 2012. The image we use here is the result of 1.8 million seconds (500 hours!) of exposure time, and includes some of the faintest and most distant galaxies that had ever been observed. \n", + "\n", + "*The methods demonstrated here are also available in the [`photutils.detection` documentation](http://photutils.readthedocs.io/en/stable/detection.html).*\n", + "\n", + "*The original authors of this notebook were Lauren Chambers, Erik Tollerud and Tom Wilson.*\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7e90842d-8f03-408e-8622-532d1fab18c7", + "metadata": {}, + "outputs": [], + "source": [ + "import warnings\n", + "\n", + "from astropy.convolution import convolve\n", + "from astropy.nddata import CCDData\n", + "from astropy.stats import sigma_clipped_stats, SigmaClip\n", + "import astropy.units as u\n", + "from astropy.visualization import ImageNormalize, LogStretch\n", + "\n", + "import matplotlib.pyplot as plt\n", + "from matplotlib.ticker import LogLocator\n", + "\n", + "import numpy as np\n", + "\n", + "from photutils.aperture import EllipticalAperture\n", + "from photutils.background import Background2D, MeanBackground\n", + "from photutils.centroids import centroid_2dg\n", + "from photutils.detection import find_peaks, DAOStarFinder, IRAFStarFinder\n", + "from photutils.segmentation import detect_sources, make_2dgaussian_kernel, SourceCatalog\n", + "from photutils.utils import make_random_cmap\n", + "\n", + "# Show plots in the notebook\n", + "%matplotlib inline" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7fe76ad2-51c6-48fd-b91e-541a1ea9bd90", + "metadata": {}, + "outputs": [], + "source": [ + "xdf_image = CCDData.read('hlsp_xdf_hst_acswfc-60mas_hudf_f435w_v1_sci.fits')\n", + "# Define the mask\n", + "mask = xdf_image.data == 0\n", + "xdf_image.mask = mask" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "82f387f7-66af-4be6-bd2f-4f54f87ba6ed", + "metadata": {}, + "outputs": [], + "source": [ + "mean, median, std = sigma_clipped_stats(xdf_image.data, sigma=3.0, maxiters=5, mask=xdf_image.mask)" + ] + }, { "cell_type": "code", "execution_count": null, - "id": "ae3f0157-65cf-494b-a2fb-557a361e6ca8", + "id": "2b4110f0-a867-42fb-8eac-885a4883c4d6", "metadata": {}, "outputs": [], - "source": [] + "source": [ + 
"plt.style.use('../photutils_notebook_style.mplstyle')" + ] }, { "cell_type": "markdown", "id": "21c15890-d5d2-4f5a-a1dd-7795e0f609a7", - "metadata": { - "jp-MarkdownHeadingCollapsed": true - }, + "metadata": {}, "source": [ "Beyond traditional source detection methods, an additional option for identifying sources in image data is a process called **image segmentation**. This method identifies and labels contiguous (connected) objects within an image. \n", "\n", "You might have noticed that, in the previous source detection algorithms, large and extended sources are often incorrectly identified as more than one source. Segmentation would label all the pixels within a large galaxy as belonging to the same object, and would allow the user to then measure the photometry, centroid, and morphology of the entire object at once.\n", "\n", - "#### Creating a `SegmentationImage`\n", + "## Creating a `SegmentationImage`\n", "\n", - "In `photutils`, image segmentation maps are created using the threshold method in the `detect_sources()` function. This method identifies all of the objects in the data that have signals above a determined **`threshold`** (usually defined as a multiple of the standard deviation) and that have more than a defined number of adjoining pixels, **`npixels`**. The data can also optionally be smoothed using a kernel, **`filter_kernel`**, before applying the threshold cut.\n", + "In [`photutils`](https://photutils.readthedocs.io/en/stable/), image segmentation maps are created using the threshold method in the [`detect_sources`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.detect_sources.html#photutils.segmentation.detect_sources) function. This method identifies all of the objects in the data that have signals above a determined **`threshold`** (usually defined as a multiple of the standard deviation) and that have more than a defined number of adjoining pixels, **`npixels`**. The data can also optionally be smoothed using a kernel, **`filter_kernel`**, before applying the threshold cut.\n", "\n", - "The `detect_sources()` function returns a `SegmentationImage` object: an array in which each object is labeled with an integer. As a simple example, a segmentation map containing two distinct sources might look like this:\n", + "The [`detect_sources`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.detect_sources.html#photutils.segmentation.detect_sources) function returns a [`SegmentationImage`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SegmentationImage.html#photutils.segmentation.SegmentationImage) object: an array in which each object is labeled with an integer. As a simple example, a segmentation map containing two distinct sources might look like this:\n", "\n", "```\n", "0 0 0 0 0 0 0 0 0 0\n", @@ -38,18 +114,7 @@ "```\n", "where all of the pixels labeled `1` belong to the first source, all those labeled `2` belong to the second, and all null pixels are designated to be background.\n", "\n", - "Let's see what the segmentation map for our XDF data will look like." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "fdc588d5-f41b-4576-8585-c917940472e4", - "metadata": {}, - "outputs": [], - "source": [ - "from photutils import detect_sources\n", - "from photutils.utils import make_random_cmap" + "Let's see make the segmentation image for the XDF see what that segmentation map looks like." 
 ]
 },
 {
@@ -61,14 +126,25 @@
 "source": [
 "# Define threshold and minimum object size\n",
 "threshold = 5. * std\n",
- "npixels = 15\n",
- "\n",
+ "npixels = 10\n",
+ "fwhm = 5\n",
+ "kernel_size = 5\n",
+ "kernel = make_2dgaussian_kernel(fwhm, size=kernel_size)\n",
+ "convolved_data = convolve(xdf_image.data, kernel)\n",
 "# Create a segmentation image\n",
- "segm = detect_sources(xdf_image.data, threshold, npixels)\n",
+ "segm = detect_sources(convolved_data, threshold, npixels)\n",
 "\n",
 "print('Found {} sources'.format(segm.max_label))"
 ]
 },
+ {
+ "cell_type": "markdown",
+ "id": "99298087-4b26-4eab-8a9b-deb3aea08fb5",
+ "metadata": {},
+ "source": [
+ "The number of sources found with this method is similar to that found with the other methods we have considered. A more detailed comparison is made later in this section."
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": null,
@@ -80,23 +156,27 @@
 "fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(12, 6))\n",
 "plt.tight_layout()\n",
 "\n",
+ "# Plot the data\n",
+ "# Set up the normalization and colormap\n",
+ "norm_image = ImageNormalize(vmin=1e-4, vmax=5e-2, stretch=LogStretch(), clip=False)\n",
+ "cmap = plt.get_cmap('viridis')\n",
+ "cmap.set_bad('white') # Show masked data as white\n",
+ "\n",
 "# Plot the original data\n",
- "fitsplot = ax1.imshow(np.ma.masked_where(xdf_image.mask, xdf_image_clipped),\n",
+ "fitsplot = ax1.imshow(np.ma.masked_where(xdf_image.mask, xdf_image),\n",
 "                      norm=norm_image, cmap=cmap)\n",
 "ax1.set_ylabel('Y (pixels)')\n",
 "ax1.set_title('Original Data')\n",
 "\n",
 "# Plot the segmentation image\n",
- "rand_cmap = make_random_cmap(seed=12345)\n",
- "rand_cmap.set_under('black')\n",
- "segplot = ax2.imshow(np.ma.masked_where(xdf_image.mask, segm), vmin=1, cmap=rand_cmap)\n",
+ "segplot = ax2.imshow(np.ma.masked_where(xdf_image.mask, segm), vmin=1, cmap=segm.cmap)\n",
 "ax2.set_xlabel('X (pixels)')\n",
 "ax2.set_title('Segmentation Map')\n",
 "\n",
- "# Define the colorbar\n",
- "cbar_ax = fig.add_axes([1, 0.09, 0.03, 0.87])\n",
- "cbar = plt.colorbar(segplot, cbar_ax)\n",
- "cbar.set_label('Object Label', rotation=270, labelpad=30)"
+ "# # Define the colorbar\n",
+ "# cbar_ax = fig.add_axes([1, 0.09, 0.03, 0.87])\n",
+ "# cbar = plt.colorbar(segplot, cbar_ax)\n",
+ "# cbar.set_label('Object Label', rotation=270, labelpad=30)"
 ]
 },
 {
@@ -111,131 +191,218 @@
 },
 {
 "cell_type": "markdown",
- "id": "e9bb56ee-c640-4c69-976e-e6f2e3ef536c",
+ "id": "3e15e7d8-3aa9-4b75-ab03-88d3d6461eb5",
 "metadata": {},
 "source": [
- "For a closer look, let's see what the sources we found with `find_peaks` look like in this segmentation map:"
+ "## Analyzing the source catalog\n",
+ "\n",
+ "Once we have a [`SegmentationImage`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SegmentationImage.html#photutils.segmentation.SegmentationImage) object, [`photutils`](https://photutils.readthedocs.io/en/stable/) provides many powerful tools to manipulate and analyze the identified objects. \n",
\n", + "\n", + "Individual objects within the segmentation map can be altered using methods such as `relabel` to change the labels of objects, `remove_labels` to remove objects, or `deblend_sources` to separating overlapping sources that were incorrectly labeled as one source.\n", + "\n", + "However, perhaps the most powerful aspect of the [`SegmentationImage`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SegmentationImage.html#photutils.segmentation.SegmentationImage) is the ability to create a catalog using [`SourceCatalog`](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SourceCatalog.html#photutils.segmentation.SourceCatalog) to measure the centroids, photometry, and morphology of the detected objects:" ] }, { "cell_type": "code", "execution_count": null, - "id": "fd673ddd-bcf4-4e49-8281-a0bbc75ca080", + "id": "31715b6e-7cc4-47d5-be5b-b49e08740a36", "metadata": {}, "outputs": [], "source": [ - "fig, axs = plt.subplots(3,3, figsize=(3, 3))\n", - "plt.subplots_adjust(wspace=0.1, hspace=0.1)\n", - "\n", - "cutout_size = 20\n", - "\n", - "srcs = np.random.permutation(sources_findpeaks)[:axs.size]\n", - "for ax, src in zip(axs.ravel(), srcs):\n", - " slc = (slice(int(src['y_peak'] - cutout_size), int(src['y_peak'] + cutout_size)),\n", - " slice(int(src['x_peak'] - cutout_size), int(src['x_peak'] + cutout_size)))\n", - " ax.imshow(segm.data[slc], cmap=rand_cmap, vmin=1, vmax=len(sources_findpeaks))\n", - " src_id = np.where((sources_findpeaks['x_peak'] == src['x_peak']) &\n", - " (sources_findpeaks['y_peak'] == src['y_peak']))[0][0]\n", - " ax.text(2, 2, str(src_id), color='w', va='top')\n", - " ax.set_xticks([])\n", - " ax.set_yticks([])" + "catalog = SourceCatalog(xdf_image.data, segm)\n", + "table = catalog.to_table()\n", + "print(table)\n", + "print(table.colnames)" ] }, { "cell_type": "markdown", - "id": "40fd8f04-d36e-4d43-b157-60c7b2ee6a3c", + "id": "350de88c-eaa6-46d0-a964-d873d77cd297", "metadata": {}, "source": [ - "
\n", + "### Creating apertures from segmentation data\n", "\n", - "

Exercises:


\n", - "\n", - "Recompute the SegmentationImage, but alter the threshold and the minimum number of pixels in a source. How does changing the threshold affect the results? What about changing the number of pixels?\n", + "We can use this information to create isophotal ellipses for each identified source. These ellipses can also later be used as photometric apertures." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "71404fa0-1fd2-4f67-aa4a-4c408e0e7ddc", + "metadata": {}, + "outputs": [], + "source": [ + "# Define the approximate isophotal extent\n", + "r = 4. # pixels\n", "\n", - "
" + "# Create the apertures\n", + "apertures = []\n", + "for obj in catalog:\n", + " position = (obj.xcentroid, obj.ycentroid)\n", + " a = obj.semimajor_sigma.value * r\n", + " b = obj.semiminor_sigma.value * r\n", + " theta = obj.orientation\n", + " apertures.append(EllipticalAperture(position, a, b, theta=theta))" ] }, { - "cell_type": "markdown", - "id": "3e15e7d8-3aa9-4b75-ab03-88d3d6461eb5", + "cell_type": "code", + "execution_count": null, + "id": "8750e2c1-b13f-4b78-9026-59e9efe31122", "metadata": {}, + "outputs": [], "source": [ - "#### Analyzing `source_properties`\n", + "# Set up the figure with subplots\n", + "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n", "\n", - "Once we have a `SegmentationImage` object, `photutils` provides many powerful tools to manipulate and analyze the identified objects. \n", + "# Plot the data\n", + "fitsplot = ax1.imshow(np.ma.masked_where(xdf_image.mask, xdf_image),\n", + " norm=norm_image, cmap=cmap)\n", "\n", - "Individual objects within the segmentation map can be altered using methods such as `relabel` to change the labels of objects, `remove_labels` to remove objects, or `deblend_sources` to separating overlapping sources that were incorrectly labeled as one source.\n", + "# Plot the apertures\n", + "for aperture in apertures:\n", + " aperture.plot(color='red', lw=1, alpha=0.7, axes=ax1)\n", + "\n", + "# Define the colorbar\n", + "cbar = plt.colorbar(fitsplot, fraction=0.046, pad=0.04, ticks=LogLocator())\n", + "\n", + "def format_colorbar(bar):\n", + " # Add minor tickmarks\n", + " bar.ax.yaxis.set_minor_locator(LogLocator(subs=range(1, 10)))\n", "\n", - "However, perhaps the most powerful aspect of the `SegmentationImage` is the ability to create a catalog using `source_properties` to measure the centroids, photometry, and morphology of the detected objects:" + " # Force the labels to be displayed as powers of ten and only at exact powers of ten\n", + " bar.ax.set_yticks([1e-4, 1e-3, 1e-2])\n", + " labels = [f'$10^{{{pow:.0f}}}$' for pow in np.log10(bar.ax.get_yticks())]\n", + " bar.ax.set_yticklabels(labels)\n", + "\n", + "format_colorbar(cbar)\n", + "\n", + "# Define labels\n", + "cbar.set_label(r'Flux Count Rate ({})'.format(xdf_image.unit.to_string('latex')),\n", + " rotation=270, labelpad=30)\n", + "ax1.set_xlabel('X (pixels)')\n", + "ax1.set_ylabel('Y (pixels)')\n", + "ax1.set_title('Segmentation Image Apertures');" + ] + }, + { + "cell_type": "markdown", + "id": "f7a68668-bf36-46f0-9efa-8063f9643bcd", + "metadata": {}, + "source": [ + "## Comparison with other detection methods" + ] + }, + { + "cell_type": "markdown", + "id": "e9bb56ee-c640-4c69-976e-e6f2e3ef536c", + "metadata": {}, + "source": [ + "We begin by finding sources using [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) and [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks)." ] }, { "cell_type": "code", "execution_count": null, - "id": "6c3b45a9-9bf5-41d1-809b-de6efd8088ae", + "id": "b1d61489-3595-46b9-aaaf-57f11ac2126d", "metadata": {}, "outputs": [], "source": [ - "from photutils import source_properties" + "with warnings.catch_warnings(action=\"ignore\"):\n", + " sources_findpeaks = find_peaks(xdf_image.data - median, mask=xdf_image.mask,\n", + " threshold=20. 
* std, box_size=29,\n",
+ "                                   centroid_func=centroid_2dg)\n",
+ "\n",
+ "# Add an ID column to the find_peaks result\n",
+ "sources_findpeaks[\"source_id\"] = np.arange(len(sources_findpeaks))"
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": null,
+ "id": "19c62f54-3040-4ec8-9c12-f02d271e3ac4",
 "metadata": {},
 "outputs": [],
 "source": [
+ "daofind = DAOStarFinder(fwhm=fwhm, threshold=threshold)\n",
+ "sources_dao = daofind(np.ma.masked_where(xdf_image.mask, xdf_image))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9bf12c49-34d3-498e-9586-1525ce1022ea",
+ "metadata": {},
+ "source": [
+ "A plot of a random subset of sources highlights the differences between the methods."
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": null,
+ "id": "1176098d-8a00-402d-9220-a1c54044c0c4",
 "metadata": {},
 "outputs": [],
 "source": [
+ "fig = plt.figure(figsize=(8, 2.5))\n",
+ "sub_figs = fig.subfigures(nrows=2, ncols=4, wspace=0.01, hspace=0.01)\n",
+ "\n",
+ "cutout_size = 60\n",
+ "rng = np.random.default_rng(seed=8675309)\n",
+ "srcs = rng.permutation(sources_findpeaks)[:len(sub_figs.flatten())]\n",
+ "for a_fig, src in zip(sub_figs.flatten(), srcs):\n",
+ "    sub_axs = a_fig.subplot_mosaic([[\"orig\", \"segm\"]], sharex=True, sharey=True)\n",
+ "    left = int(src['x_peak'] - cutout_size)\n",
+ "    bottom = int(src['y_peak'] - cutout_size)\n",
+ "    right = left + 2 * cutout_size\n",
+ "    top = bottom + 2 * cutout_size\n",
+ "    slc = (slice(bottom, top), slice(left, right))\n",
+ "    sources_in_cutout_fp = ((left < sources_findpeaks[\"x_centroid\"]) & (sources_findpeaks[\"x_centroid\"] < right) &\n",
+ "                            (bottom < sources_findpeaks[\"y_centroid\"]) & (sources_findpeaks[\"y_centroid\"] < top))\n",
+ "    sources_in_cutout_dao = ((left < sources_dao[\"xcentroid\"]) & (sources_dao[\"xcentroid\"] < right) &\n",
+ "                             (bottom < sources_dao[\"ycentroid\"]) & (sources_dao[\"ycentroid\"] < top))\n",
+ "    sub_axs[\"orig\"].imshow(xdf_image.data[slc], cmap=cmap, norm=norm_image, origin=\"lower\")\n",
+ "    sub_axs[\"orig\"].scatter(sources_findpeaks[\"x_centroid\"][sources_in_cutout_fp] - left, \n",
+ "                            sources_findpeaks[\"y_centroid\"][sources_in_cutout_fp] - bottom, \n",
+ "                            marker=\"x\", color=\"orange\", s=100, label=\"Find peaks\")\n",
+ "    sub_axs[\"orig\"].scatter(sources_dao[\"xcentroid\"][sources_in_cutout_dao] - left, \n",
+ "                            sources_dao[\"ycentroid\"][sources_in_cutout_dao] - bottom, \n",
+ "                            marker=\"s\", facecolor=\"none\", edgecolor=\"cyan\", s=200, \n",
+ "                            alpha=0.5, label=\"DAOFind\")\n",
+ "\n",
+ "    sub_axs[\"segm\"].imshow(segm.data[slc], cmap=segm.cmap, vmin=1, vmax=segm.max_label, origin=\"lower\")\n",
+ "    src_id = src[\"source_id\"]\n",
+ "    a_fig.suptitle(f\"Source {src_id}\", fontsize=10)\n",
+ "\n",
+ "    for ax in sub_axs.values():\n",
+ "        ax.set_xticks([])\n",
+ "        ax.set_yticks([])\n",
+ "\n",
+ "fig.suptitle(\"Source detection method comparison\", y=1.1)\n",
+ "a_fig.legend(ncols=2, bbox_to_anchor=(-.2, 0.2));"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fabfbef0-e62a-41a9-974a-4a7b5210fcc8",
+ "metadata": {},
+ "source": [
+ "Star finding finds almost all of the point-like sources in these cutouts, but also identifies many sources in each extended object. Peak finding misses many of the star-like objects and sometimes assigns multiple objects to a single extended source. Image segmentation identifies extended sources as single objects but misses most of the small star-like objects. Adjusting the image segmentation parameters to allow smaller sources would improve detection of those objects."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a700cbb-593c-4a72-b2ef-b1b54a850aa5",
+ "metadata": {},
+ "source": [
+ "We now look at the large galaxy near the top center of the XDF. Image segmentation is the only method that correctly identifies that large galaxy as a single object."
+ ]
+ },
 {
 "cell_type": "code",
 "execution_count": null,
- 
"cell_type": "code", - "execution_count": null, - "id": "71404fa0-1fd2-4f67-aa4a-4c408e0e7ddc", + "cell_type": "markdown", + "id": "fabfbef0-e62a-41a9-974a-4a7b5210fcc8", "metadata": {}, - "outputs": [], "source": [ - "# Define the approximate isophotal extent\n", - "r = 4. # pixels\n", - "\n", - "# Create the apertures\n", - "apertures = []\n", - "for obj in catalog:\n", - " position = (obj.xcentroid.value, obj.ycentroid.value)\n", - " a = obj.semimajor_axis_sigma.value * r\n", - " b = obj.semiminor_axis_sigma.value * r\n", - " theta = obj.orientation.value\n", - " apertures.append(EllipticalAperture(position, a, b, theta=theta))" + "Star finding finds almost all of the point-like sources in these cutouts, but also identifies many sources in each extended object. Peak finding misses many of the star-like objects and sometimes assigns multiple objects to a single extended source. Image segmentation identifies extended sources as single objects but misses most of the small star-like objects. Adjusting the image segmentation parameters to allow smaller sources would improve detection of those objects." + ] + }, + { + "cell_type": "markdown", + "id": "9a700cbb-593c-4a72-b2ef-b1b54a850aa5", + "metadata": {}, + "source": [ + "We now look at the large galaxy near the top center of the XDF. Image segmentation is the only method that correctly identifies that large galaxy as a single object." ] }, { "cell_type": "code", "execution_count": null, - "id": "8750e2c1-b13f-4b78-9026-59e9efe31122", + "id": "f3551192-ea06-452e-a5ab-57c322e8c4fc", "metadata": {}, "outputs": [], "source": [ @@ -243,24 +410,53 @@ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n", "\n", "# Plot the data\n", - "fitsplot = ax1.imshow(np.ma.masked_where(xdf_image.mask, xdf_image_clipped),\n", + "fitsplot = ax1.imshow(np.ma.masked_where(xdf_image.mask, xdf_image),\n", " norm=norm_image, cmap=cmap)\n", "\n", "# Plot the apertures\n", - "for aperture in apertures:\n", - " aperture.plot(color='red', lw=1, alpha=0.7, axes=ax1)\n", + "apertures[0].plot(color='red', lw=2, alpha=1, axes=ax1, label=\"Image segmentation\")\n", + "for aperture in apertures[1:]:\n", + " aperture.plot(color='red', lw=2, alpha=1, axes=ax1)\n", + "\n", + "# Add find_peaks sources\n", + "ax1.scatter(sources_findpeaks[\"x_centroid\"], sources_findpeaks[\"y_centroid\"], \n", + " marker=\"x\", color=\"orange\", s=200, lw=2, alpha=0.7, label=\"find_peaks\")\n", + "\n", + "# Add DAOStarFinder sources\n", + "ax1.scatter(sources_dao[\"xcentroid\"], sources_dao[\"ycentroid\"], \n", + " marker=\"s\", facecolor=\"none\", edgecolor=\"cyan\", s=200, alpha=0.5, label=\"DAOFind\")\n", "\n", "# Define the colorbar\n", - "cbar = plt.colorbar(fitsplot, fraction=0.046, pad=0.04, ticks=LogLocator(subs=range(10)))\n", - "labels = ['$10^{-4}$'] + [''] * 8 + ['$10^{-3}$'] + [''] * 8 + ['$10^{-2}$']\n", - "cbar.ax.set_yticklabels(labels)\n", + "cbar = plt.colorbar(fitsplot, fraction=0.046, pad=0.04, ticks=LogLocator())\n", + "\n", + "format_colorbar(cbar)\n", "\n", "# Define labels\n", "cbar.set_label(r'Flux Count Rate ({})'.format(xdf_image.unit.to_string('latex')),\n", " rotation=270, labelpad=30)\n", "ax1.set_xlabel('X (pixels)')\n", "ax1.set_ylabel('Y (pixels)')\n", - "ax1.set_title('Segmentation Image Apertures')" + "\n", + "top = 1125\n", + "left = 2350\n", + "cutout_size = 500\n", + "\n", + "ax1.set_xlim(left, left + cutout_size)\n", + "ax1.set_ylim(top, top + cutout_size)\n", + "\n", + "ax1.legend(ncols=3, loc='lower center', bbox_to_anchor=(0.5, 1))\n", + "\n", + 
"ax1.set_title('Source detection method comparison', y=1.05);" + ] + }, + { + "cell_type": "markdown", + "id": "c0b2a1b2-57c0-49a5-8924-5372cd43613c", + "metadata": {}, + "source": [ + "## Summary\n", + "\n", + "The image above nicely captures the differences between the source detection methods we have discussed. [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) does an excellent job detecting the star-like sources in the image but not the more extended sources. [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) detects both extended and point-like sources, but misses several sources. Image segmentation does an excellent job detecting sources, and could detect the smaller sources in this image if we reduced the minimum number of pixels in each source." ] } ], @@ -280,7 +476,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.11.9" } }, "nbformat": 4, diff --git a/notebooks/photometry/01.04.03-image-segmentation-Feder.ipynb b/notebooks/photometry/01.04.03-image-segmentation-Feder.ipynb new file mode 100644 index 0000000..8c96a9a --- /dev/null +++ b/notebooks/photometry/01.04.03-image-segmentation-Feder.ipynb @@ -0,0 +1,387 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "96ae52db-95b9-4f7b-a68e-712789addc7f", + "metadata": {}, + "source": [ + "# Application to a stellar image" + ] + }, + { + "cell_type": "markdown", + "id": "4a539fa2-f180-4b88-a620-da47c6af8270", + "metadata": {}, + "source": [ + "## Data used in this notebook\n", + "\n", + "The image in this notebook is of the field of the star [TIC 125489084](https://exofop.ipac.caltech.edu/tess/target.php?id=125489084) on a night when the moon was nearly full. The moonlight caused a smooth gradient across the background of the image.\n", + "\n", + "As discussed in the section about [background subtraction with this image](01.01.03-Background-estimation-FEDER.ipynb), the best way to remove the background in this case is to use [`photutils`](https://photutils.readthedocs.io/en/stable/) to construct a 2D model of the background. \n", + "\n", + "As we will see shortly, image segmentation doesn't significantly improve source detection in this case, since all of the sources are stars." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7e90842d-8f03-408e-8622-532d1fab18c7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import warnings\n",
+ "\n",
+ "from astropy.convolution import convolve\n",
+ "from astropy.nddata import CCDData\n",
+ "from astropy.stats import sigma_clipped_stats, SigmaClip\n",
+ "import astropy.units as u\n",
+ "from astropy.visualization import ImageNormalize, LogStretch, AsymmetricPercentileInterval\n",
+ "\n",
+ "import matplotlib.pyplot as plt\n",
+ "from matplotlib.ticker import LogLocator\n",
+ "\n",
+ "import numpy as np\n",
+ "\n",
+ "from photutils.aperture import EllipticalAperture\n",
+ "from photutils.background import Background2D, MeanBackground\n",
+ "from photutils.centroids import centroid_2dg\n",
+ "from photutils.detection import find_peaks, DAOStarFinder, IRAFStarFinder\n",
+ "from photutils.segmentation import detect_sources, make_2dgaussian_kernel, SourceCatalog\n",
+ "\n",
+ "# Show plots in the notebook\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b0e18811-4d0e-4799-9d58-7d100ce8fc3c",
+ "metadata": {},
+ "source": [
+ "In the next cell, we read the data, mask out the edges, and subtract the two-dimensional background."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7fe76ad2-51c6-48fd-b91e-541a1ea9bd90",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ccd_image = CCDData.read('TIC_125489084.01-S001-R055-C001-ip.fit.bz2')\n",
+ "\n",
+ "# Create a mask array\n",
+ "mask = np.zeros_like(ccd_image, dtype=bool)\n",
+ "\n",
+ "# Set the mask for a border of 50 pixels around the image\n",
+ "mask[:50, :] = mask[-50:, :] = mask[:, :50] = mask[:, -50:] = True\n",
+ "\n",
+ "ccd_image.mask = mask\n",
+ "\n",
+ "sigma_clip = SigmaClip(sigma=3., maxiters=5)\n",
+ "bkg_estimator = MeanBackground()\n",
+ "bkg = Background2D(ccd_image, box_size=200, filter_size=(9, 9), mask=ccd_image.mask,\n",
+ "                   sigma_clip=sigma_clip, bkg_estimator=bkg_estimator)\n",
+ "# Calculate the 2D background subtraction, maintaining metadata, unit, and mask\n",
+ "ccd_2d_bkgdsub = ccd_image.subtract(bkg.background)\n",
+ "\n",
+ "data_subtracted = ccd_2d_bkgdsub.data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "82f387f7-66af-4be6-bd2f-4f54f87ba6ed",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "mean, median, std = sigma_clipped_stats(data_subtracted, sigma=3.0, maxiters=5)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2b4110f0-a867-42fb-8eac-885a4883c4d6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "plt.style.use('../photutils_notebook_style.mplstyle')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "21c15890-d5d2-4f5a-a1dd-7795e0f609a7",
+ "metadata": {},
+ "source": [
+ "Next, we use image segmentation to find the sources."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a0a6d51c-d8b2-4ec4-9a69-343585009dfc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define threshold and minimum object size\n",
+ "threshold = 5. * std\n",
+ "npixels = 10\n",
+ "fwhm = 5\n",
+ "kernel_size = 5\n",
+ "kernel = make_2dgaussian_kernel(fwhm, size=kernel_size)\n",
+ "convolved_data = convolve(data_subtracted, kernel, mask=ccd_2d_bkgdsub.mask)\n",
+ "# Create a segmentation image\n",
+ "segm = detect_sources(convolved_data, threshold, npixels, mask=ccd_2d_bkgdsub.mask)\n",
+ "\n",
+ "print('Found {} sources'.format(segm.max_label))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bf95f3be-038e-412c-8324-039e1333d096",
+ "metadata": {},
+ "source": [
+ "This is roughly half the number of sources detected by star finding."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c5e8d393-704b-4631-977f-4f644cbeccce",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(12, 6))\n",
+ "plt.tight_layout()\n",
+ "\n",
+ "# Plot the data\n",
+ "# Set up the normalization and colormap\n",
+ "norm_image = ImageNormalize(ccd_2d_bkgdsub, interval=AsymmetricPercentileInterval(30, 99.5))\n",
+ "cmap = plt.get_cmap('viridis')\n",
+ "cmap.set_bad('white') # Show masked data as white\n",
+ "\n",
+ "# Plot the original data\n",
+ "fitsplot = ax1.imshow(np.ma.masked_where(ccd_2d_bkgdsub.mask, ccd_2d_bkgdsub),\n",
+ "                      norm=norm_image, cmap=cmap)\n",
+ "ax1.set_ylabel('Y (pixels)')\n",
+ "ax1.set_title('Original Data')\n",
+ "\n",
+ "# Plot the segmentation image\n",
+ "segplot = ax2.imshow(np.ma.masked_where(ccd_image.mask, segm), vmin=1, cmap=segm.cmap)\n",
+ "ax2.set_xlabel('X (pixels)')\n",
+ "ax2.set_title('Segmentation Map')\n",
+ "\n",
+ "# Define the colorbar\n",
+ "# cbar_ax = fig.add_axes([1, 0.09, 0.03, 0.87])\n",
+ "# cbar = plt.colorbar(segplot, cbar_ax)\n",
+ "# cbar.set_label('Object Label', rotation=270, labelpad=30)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "360d72fc-fd47-4dd7-a575-97051be44405",
+ "metadata": {},
+ "source": [
+ "As in the previous notebook, we construct a [SourceCatalog](https://photutils.readthedocs.io/en/stable/api/photutils.segmentation.SourceCatalog.html#photutils.segmentation.SourceCatalog) to obtain more information about the sources detected by segmentation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "31715b6e-7cc4-47d5-be5b-b49e08740a36",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "catalog = SourceCatalog(ccd_2d_bkgdsub.data, segm)\n",
+ "table = catalog.to_table()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "350de88c-eaa6-46d0-a964-d873d77cd297",
+ "metadata": {},
+ "source": [
+ "We can use this information to create isophotal ellipses for each identified source. These ellipses can also later be used as photometric apertures."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "71404fa0-1fd2-4f67-aa4a-4c408e0e7ddc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Define the approximate isophotal extent\n",
+ "r = 8. # pixels\n",
+ "\n",
+ "# Create the apertures\n",
+ "apertures = []\n",
+ "for obj in catalog:\n",
+ "    position = (obj.xcentroid, obj.ycentroid)\n",
+ "    a = obj.semimajor_sigma.value * r\n",
+ "    b = obj.semiminor_sigma.value * r\n",
+ "    theta = obj.orientation\n",
+ "    apertures.append(EllipticalAperture(position, a, b, theta=theta))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e9bb56ee-c640-4c69-976e-e6f2e3ef536c",
+ "metadata": {},
+ "source": [
+ "### Sources detected by image segmentation\n",
+ "\n",
+ "All of the bright sources are detected by image segmentation."
+ {
+ "cell_type": "markdown",
+ "id": "e9bb56ee-c640-4c69-976e-e6f2e3ef536c",
+ "metadata": {},
+ "source": [
+ "### Sources detected by image segmentation\n",
+ "\n",
+ "All of the bright sources are detected by image segmentation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8750e2c1-b13f-4b78-9026-59e9efe31122",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n",
+ "\n",
+ "# Plot the data\n",
+ "fitsplot = ax1.imshow(np.ma.masked_where(ccd_2d_bkgdsub.mask, ccd_2d_bkgdsub),\n",
+ "                      norm=norm_image, cmap=cmap)\n",
+ "\n",
+ "# Plot the apertures\n",
+ "for aperture in apertures:\n",
+ "    aperture.plot(color='red', lw=1, alpha=0.7, axes=ax1)\n",
+ "\n",
+ "# Define the colorbar\n",
+ "cbar = plt.colorbar(fitsplot, fraction=0.046, pad=0.04, ticks=LogLocator())\n",
+ "\n",
+ "def format_colorbar(bar):\n",
+ "    # Add minor tickmarks\n",
+ "    bar.ax.yaxis.set_minor_locator(LogLocator(subs=range(1, 10)))\n",
+ "\n",
+ "    # Force the labels to be displayed as powers of ten and only at exact powers of ten\n",
+ "    bar.ax.set_yticks([1e-4, 1e-3, 1e-2])\n",
+ "    labels = [f'$10^{{{pow:.0f}}}$' for pow in np.log10(bar.ax.get_yticks())]\n",
+ "    bar.ax.set_yticklabels(labels)\n",
+ "\n",
+ "format_colorbar(cbar)\n",
+ "\n",
+ "# Define labels\n",
+ "cbar.set_label(r'Flux Count Rate ({})'.format(ccd_image.unit.to_string('latex')),\n",
+ "               rotation=270, labelpad=30)\n",
+ "ax1.set_xlabel('X (pixels)')\n",
+ "ax1.set_ylabel('Y (pixels)')\n",
+ "ax1.set_title('Segmentation Image Apertures');"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0809e5da-0489-4a4f-98cd-97f7900a3e87",
+ "metadata": {},
+ "source": [
+ "## Comparison with other detection methods\n",
+ "\n",
+ "We rerun peak finding and star finding on this image so that we can compare the three methods."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b1d61489-3595-46b9-aaaf-57f11ac2126d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Ignore the warnings that the 2D Gaussian centroid fits can generate\n",
+ "with warnings.catch_warnings(action=\"ignore\"):\n",
+ "    sources_findpeaks = find_peaks(ccd_2d_bkgdsub.data - median, mask=ccd_2d_bkgdsub.mask,\n",
+ "                                   threshold=threshold, box_size=29,\n",
+ "                                   centroid_func=centroid_2dg)\n",
+ "\n",
+ "# Add an ID column to the find_peaks result\n",
+ "sources_findpeaks[\"source_id\"] = np.arange(len(sources_findpeaks))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "19c62f54-3040-4ec8-9c12-f02d271e3ac4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Run DAOStarFinder on the background-subtracted image\n",
+ "daofind = DAOStarFinder(fwhm=1.5 * fwhm, threshold=threshold)\n",
+ "sources_dao = daofind(ccd_2d_bkgdsub.data, mask=ccd_2d_bkgdsub.mask)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f3551192-ea06-452e-a5ab-57c322e8c4fc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set up the figure with subplots\n",
+ "fig, ax1 = plt.subplots(1, 1, figsize=(8, 8))\n",
+ "\n",
+ "# Plot the data\n",
+ "fitsplot = ax1.imshow(np.ma.masked_where(ccd_2d_bkgdsub.mask, ccd_2d_bkgdsub),\n",
+ "                      norm=norm_image, cmap=cmap)\n",
+ "\n",
+ "# Plot the segmentation apertures\n",
+ "apertures[0].plot(color='red', lw=2, alpha=1, axes=ax1, label=\"Image segmentation\")\n",
+ "for aperture in apertures[1:]:\n",
+ "    aperture.plot(color='red', lw=2, alpha=1, axes=ax1)\n",
+ "\n",
+ "# Add find_peaks sources\n",
+ "ax1.scatter(sources_findpeaks[\"x_centroid\"], sources_findpeaks[\"y_centroid\"],\n",
+ "            marker=\"x\", color=\"orange\", s=200, lw=2, alpha=0.7, label=\"find_peaks\")\n",
+ "\n",
+ "# Add DAOStarFinder sources\n",
+ "ax1.scatter(sources_dao[\"xcentroid\"], sources_dao[\"ycentroid\"],\n",
+ "            marker=\"s\", facecolor=\"none\", edgecolor=\"cyan\", s=200, alpha=0.5,\n",
+ "            label=\"DAOStarFinder\")\n",
+ "\n",
+ "# Uncomment the set_xlim/set_ylim lines below to zoom in on a cutout\n",
+ "top = 1125\n",
+ "left = 2350\n",
+ "cutout_size = 500\n",
+ "\n",
+ "# ax1.set_xlim(left, left + cutout_size)\n",
+ "# ax1.set_ylim(top, top + cutout_size)\n",
+ "\n",
+ "ax1.legend(ncols=3, loc='lower center', bbox_to_anchor=(0.5, 1))\n",
+ "\n",
+ "ax1.set_title('Source detection method comparison', y=1.05);"
+ ]
+ },
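+ {
+ "cell_type": "markdown",
+ "id": "f1a75b46-5a6b-4c9d-8e4f-7b8c9d0e1f2a",
+ "metadata": {},
+ "source": [
+ "Before summarizing the comparison, one capability is worth a quick look: [`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) accepts a `min_separation` parameter that discards detections closer together, in pixels, than the value given. The cell below is a minimal sketch reusing the threshold and FWHM from above; the separation of `3 * fwhm` is an arbitrary illustrative choice."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a2b86c57-6b7c-4dae-9f5a-8c9d0e1f2a3b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Minimal sketch of star finding with a minimum source separation;\n",
+ "# the choice of 3 * fwhm pixels here is purely illustrative\n",
+ "iraffind = IRAFStarFinder(threshold=threshold, fwhm=1.5 * fwhm,\n",
+ "                          min_separation=3 * fwhm)\n",
+ "sources_iraf = iraffind(ccd_2d_bkgdsub.data, mask=ccd_2d_bkgdsub.mask)\n",
+ "\n",
+ "# The finder returns None if no sources are found\n",
+ "n_iraf = len(sources_iraf) if sources_iraf is not None else 0\n",
+ "print('IRAFStarFinder found {} sources'.format(n_iraf))"
+ ]
+ },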
label=\"DAOFind\")\n", + "\n", + "top = 1125\n", + "left = 2350\n", + "cutout_size = 500\n", + "\n", + "# ax1.set_xlim(left, left + cutout_size)\n", + "# ax1.set_ylim(top, top + cutout_size)\n", + "\n", + "ax1.legend(ncols=3, loc='lower center', bbox_to_anchor=(0.5, 1))\n", + "\n", + "ax1.set_title('Source detection method comparison', y=1.05);" + ] + }, + { + "cell_type": "markdown", + "id": "c0b2a1b2-57c0-49a5-8924-5372cd43613c", + "metadata": {}, + "source": [ + "## Summary\n", + "\n", + "The image above nicely captures the differences between the source detection methods we have discussed. [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) does an excellent job detecting the star-like sources in the image. [`find_peaks`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.find_peaks.html#photutils.detection.find_peaks) detects almost everything that [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder) does but also has many false identifications. Image segmentation does an excellent job detecting sources though it misses some of the fainter sources found by [`DAOStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.DAOStarFinder.html#photutils.detection.DAOStarFinder). This could likely be fixed by adjusting the settings for image segmentation.\n", + "\n", + "The choice of source detection method in a case like may come down to whether you have additional constraints you want to impose. Recall that the [`IRAFStarFinder`](https://photutils.readthedocs.io/en/stable/api/photutils.detection.IRAFStarFinder.html#photutils.detection.IRAFStarFinder) allows to to provide a minimum separation of sources, for example. For performing aperture photometry that is useful since there is no way to separate sources with that technique." + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}