
Exploratory Data Analysis: Gamma Spectroscopy in Python (Part 2)

In the first part, I did an exploratory data analysis of gamma spectroscopy data. We saw that with a modern scintillation detector, we can not only see that an object is radioactive; with a gamma spectrum, we can also tell why it is radioactive and what kind of isotopes the object contains.

In this part, we will go further, and I will show how to make and train a machine learning model for detecting radioactive elements.

Before we begin, an important warning. All data files collected for this article are available on Kaggle, and readers can train and test their ML models without having real hardware. If you want to test real objects, do it at your own risk. I did my tests with sources that can be legally found and purchased, like vintage uranium glass or old watches with radium dial paint. Please check your local laws and read safety guidelines about handling radioactive materials. Sources used in this test are not seriously dangerous, but still must be handled with care!

Now, let’s get started! I will show how to collect the data, train the model, and run it using a Radiacode scintillation detector. For those readers who do not have Radiacode hardware, a link to the data source is added at the end of the article.

Methodology

This article will contain several parts:

  1. I will briefly explain what a gamma spectrum is and how we can use it.
  2. We will collect the data for our ML model. I will show the code for collecting the spectra using the Radiacode device.
  3. We will train the model and verify its accuracy.
  4. Finally, I will make an HTMX-based web frontend for the model, and we will see the results in real-time.

Let’s get into it!

1. Gamma Spectrum

This is a short recap of the first part, and for more details, I highly recommend reading it first.

Why is the gamma spectrum so interesting? Some objects around us can be slightly radioactive. The sources vary from the naturally occurring radiation of granite in buildings to the radium in some vintage watches or the thorium in modern thoriated tungsten rods. A Geiger counter only shows us the number of radioactive particles that were detected. A scintillation detector shows us not only the number of particles but also their energies. This is a crucial difference: it turns out that different radioactive materials emit gamma rays with different energies, and each material has its own “footprint.”

As a first example, I bought this pendant in a Chinese shop:

Image by author

It was advertised as “ion-generating,” so I already suspected that the pendant could be slightly radioactive (ionizing radiation, as its name suggests, can produce ions). Indeed, as we can see on the meter screen, its radioactivity level is about 1.20 µSv/h, which is 12 times higher than the background (0.10 µSv/h). It is not crazy high and is comparable to the level on an airplane during a flight, but it is still statistically significant 😉

However, by only observing the value, we cannot tell why the object is radioactive. A gamma spectrum will show us what isotopes are inside the object:

Image by author

In this example, the pendant contains thorium-232, whose decay chain produces radium and actinium. As we can see on the graph, the actinium-228 peak is clearly visible in the spectrum.

As a second example, let’s say we have found this piece of rock:

Image source Wikipedia

This is uraninite, a mineral that contains a lot of uranium dioxide. Such specimens can be found in some areas of Germany, the Czech Republic, or the US. If we buy it in a mineral shop, it probably has a label on it. But in the field, that’s usually not the case 😉 With a gamma spectrum, we can see an image like this:

Image by author

By comparing the peaks with known isotopes, we can tell that the rock contains uranium, but, for example, not thorium.

A physical explanation of the gamma spectrum is also fascinating. As we can see on the graph below, gamma rays are actually photons and belong to the same spectrum as visible light:

Electromagnetic spectrum, Image source Wikipedia

Some people think that radioactive objects glow in the dark, and it’s actually true! Every radioactive material is indeed glowing with its own unique “color,” but in a part of the spectrum that is far beyond what the human eye can see.

A second fascinating thing is that just 10-20 years ago, gamma spectroscopy was available only to institutions and big labs (in the best case, some used crystals of unknown quality could be found on eBay). Nowadays, due to advancements in electronics, a scintillation detector can be purchased for the price of a mid-range smartphone.

Now, let’s return to our project. As we can see from the two examples above, the spectra of different objects are different. Let’s create a machine learning model that can automatically detect various elements.

2. Collecting the Data

As readers can guess, our first challenge is collecting the samples. I am not a nuclear institution, and I don’t have access to calibrated test sources like cesium or strontium. However, for our task, this is not required, and some materials can be legally found and purchased. For example, americium is still used in smoke detectors; radium was used to paint watch dials before the 1960s; uranium was widely used in glass manufacturing before the 1950s; and thoriated tungsten rods are still produced today and can be purchased from Amazon. Even natural uranium ore can be purchased in mineral shops; however, it requires a bit more safety precautions. And a good thing about gamma spectroscopy is that we don’t need to disassemble or break the items, and the process is generally safe.

The second challenge is collecting the data. If you work in e-commerce, this is usually not a problem, and every SQL request will return millions of records. Alas, in the “real world,” it can be much more challenging, especially if you want to make a database of radioactive materials. In our case, collecting every spectrum requires 10-20 minutes, and for every test object, it would be nice to have at least 10 records. As we can see, the process can take hours, and having millions of records is not a realistic option.

For getting the spectrum data, I will be using a Radiacode 103G scintillation detector and an open-source radiacode library.

Radiacode detector, Image by author

A gamma spectrum can be exported in XML format using the official Radiacode Android app, but the manual process is too slow and tedious. Instead, I created a Python script that collects the spectra using random time intervals:

import datetime
import json
import logging
import random
import time

from radiacode import RadiaCode, RawData, Spectrum


def read_forever(rc: RadiaCode):
    """ Read data from the device """
    while True:
        interval_sec = random.randint(10*60, 30*60)
        read_spectrum(rc, interval_sec)

def read_spectrum(rc: RadiaCode, interval: int):
    """ Read and save spectrum """
    rc.spectrum_reset()

    # Read
    dt = datetime.datetime.now()
    filename = dt.strftime("spectrum-%Y%m%d%H%M%S.json")
    logging.debug(f"Making spectrum for {interval // 60} min")

    # Wait
    t_start = time.monotonic()
    while time.monotonic() - t_start < interval:
        show_device_data(rc)
        time.sleep(0.4)

    # Save
    spectrum: Spectrum = rc.spectrum()
    spectrum_save(spectrum, filename)

def show_device_data(rc: RadiaCode):
    """ Get CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RawData):
            log_str = f"CPS: {int(record.count_rate)}"
            logging.debug(log_str)

def spectrum_save(spectrum: Spectrum, filename: str):
    """ Save  spectrum data to log """
    duration_sec = spectrum.duration.total_seconds()
    data = {
            "a0": spectrum.a0,
            "a1": spectrum.a1,
            "a2": spectrum.a2,
            "counts": spectrum.counts,
            "duration": duration_sec,
    }
    with open(filename, "w") as f_out:
        json.dump(data, f_out, indent=4)
        logging.debug(f"File '{filename}' saved")


rc = RadiaCode()
read_forever(rc)

Some error handling is omitted here for clarity reasons. A link to the full source code can be found at the end of the article.

As we can see, I randomly select a time interval between 10 and 30 minutes, collect the gamma spectrum data, and save it to a JSON file. Now, I only need to place the Radiacode detector near the object and leave the script running for several hours. As a result, 10-20 JSON files will be saved. I also need to repeat the process for every sample I have. As a final output, 100-200 files can be collected. It’s still not millions, but as we will see, it is enough for our task.

3. Training the Model

When the data from the previous step is ready, we can start training the model. As a reminder, all files are available on Kaggle, and readers are welcome to make their own models as well.

First, let’s preprocess the data and extract the features we want to use.

3.1 Data Load

When the data is collected, we should have some spectrum files saved in JSON format. An individual file looks like this:

{
    "a0": 24.524023056030273,
    "a1": 2.2699732780456543,
    "a2": 0.0004327862989157,
    "counts": [ 48, 52, , ..., 0, 35],
    "duration": 1364.0
}

Here, the “counts” array is the actual spectrum data. Different detectors may have different formats; a Radiacode returns the data in the form of a 1024-channel array. Calibration constants [a0, a1, a2] allow us to convert the channel number into the energy in keV (kiloelectronvolt).
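For example, using the calibration constants from the JSON file above, we can convert a channel number into its energy (the constants are taken from that sample file; the channel number is arbitrary):

# Channel-to-energy conversion: E = a0 + a1*ch + a2*ch^2,
# with the calibration constants from the JSON example above
a0, a1, a2 = 24.524023056030273, 2.2699732780456543, 0.0004327862989157

ch = 511
energy = a0 + a1 * ch + a2 * ch**2
print(f"Channel {ch} -> {energy:.1f} keV")
#> Channel 511 -> 1297.5 keV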

First, let’s make a method to load the spectrum from a file:

import json
from dataclasses import dataclass

import numpy as np


@dataclass
class Spectrum:
    """ Radiation spectrum measurement data """

    duration: int
    a0: float
    a1: float
    a2: float
    counts: list[int]

    def channel_to_energy(self, ch: int) -> float:
        """ Convert channel number to the energy level """
        return self.a0 + self.a1 * ch + self.a2 * ch**2

    def energy_to_channel(self, e: float) -> int:
        """ Convert energy to the channel number (inverse E = a0 + a1*C + a2 C^2) """
        c = self.a0 - e
        return int(
            (np.sqrt(self.a1**2 - 4 * self.a2 * c) - self.a1) / (2 * self.a2)
        )


def load_spectrum_json(filename: str) -> Spectrum:
    """ Load spectrum from a json file """
    with open(filename) as f_in:
        data = json.load(f_in)
        return Spectrum(
            a0=data["a0"], a1=data["a1"], a2=data["a2"],
            counts=data["counts"],
            duration=int(data["duration"]),
        )

Now, we can draw it with Matplotlib:

from typing import Optional

import matplotlib.pyplot as plt

def draw_simple_spectrum(spectrum: Spectrum, title: Optional[str] = None):
    """ Draw spectrum obtained from the Radiacode """
    fig, ax = plt.subplots(figsize=(12, 3))
    ax.spines["top"].set_color("lightgray")
    ax.spines["right"].set_color("lightgray")
    counts = spectrum.counts
    energy = [spectrum.channel_to_energy(x) for x in range(len(counts))]
    # Bars
    ax.bar(energy, counts, width=3.0, label="Counts")
    # X values
    ticks_x = [
       spectrum.channel_to_energy(ch) for ch in range(0, len(counts), len(counts) // 20)
    ]
    labels_x = [f"{ch:.1f}" for ch in ticks_x]
    ax.set_xticks(ticks_x, labels=labels_x)
    ax.set_xlim(energy[0], energy[-1])
    plt.ylim(0, None)
    title_str = "Gamma-spectrum" if title is None else title
    ax.set_title(title_str)
    ax.set_xlabel("Energy, keV")
    plt.legend()
    fig.tight_layout()


sp = load_spectrum_json("thorium-20250617012217.json")
draw_simple_spectrum(sp)

The output looks like this:

Thorium spectrum, image by author

What can we see here?

As was mentioned before, a standard Geiger counter gives us only the number of detected particles. It tells us whether the object is radioactive, but nothing more. A scintillation detector gives us the number of particles grouped by their energies, which is practically a ready-to-use histogram! Radioactive decay itself is random, so the longer the collection time, the “smoother” the graph.

3.2 Data Transform

3.2.1 Normalization
Let’s look at the spectrum again:

Here, the data was collected for about 10 minutes, and the vertical axis shows the number of detected particles. This approach has a simple problem: the number of particles is not constant. It depends on both the collection time and the “strength” of the source. This means that we may get not 600 particles, as on this graph, but 60 or 6,000. We can also see that the data is a bit noisy; this is especially visible with a “weak” source and a short collection time.

To eliminate these issues, I decided to use a two-step pipeline. First, I applied the Savitzky-Golay filter to reduce the noise:

from scipy.signal import savgol_filter

def smooth_data(data: np.array) -> np.array:
    """ Apply 1D smoothing filter to the data array """
    window_size = 10
    data_out = savgol_filter(
        data,
        window_length=window_size,
        polyorder=2,
    )
    return np.clip(data_out, a_min=0, a_max=None)

It is especially useful for spectra with short collection times, where the peaks are not so well visible.

Second, I normalized a NumPy array to 0..1 by simply dividing its values by the maximum.

A final “normalize” method looks like this:

def normalize(spectrum: Spectrum) -> Spectrum:
    """ Normalize data to the vertical range of 0..1 """
    # Smooth data
    counts = np.array(spectrum.counts).astype(np.float64)
    counts = smooth_data(counts)

    # Normalize
    val_norm = counts.max()
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = counts/val_norm
    )

As a result, spectra from different sources now have a similar scale:

Image by author

As we can also see, the difference between the two samples is quite visible.

3.2.2 Data Augmentation
Technically, we are ready to train the model. However, as we saw in the “Collecting the data” part, the dataset is pretty small – I may have only 100-200 files in total. The solution is to augment the data by adding more synthetic samples.

As a simple approach, I decided to add some noise to the original spectra. But how much noise should we add? I selected the 680 keV channel as a reference value, because this part of the spectrum has no interesting isotope peaks. Then I added noise with 50% of the amplitude of that channel. An np.clip call guarantees that the data values are not negative (a negative number of detected particles makes no physical sense).

def add_noise(spectrum: Spectrum) -> Spectrum:
    """ Add random noise to the spectrum """
    counts = np.array(spectrum.counts)    
    ch_empty = spectrum.energy_to_channel(680.0)
    val_norm = counts[ch_empty]

    ampl = val_norm / 2
    noise = np.random.normal(0, ampl, counts.shape)
    data_out = np.clip(counts + noise, a_min=0, a_max=None)
    return Spectrum(
        duration=spectrum.duration,
        a0 = spectrum.a0,
        a1 = spectrum.a1,
        a2 = spectrum.a2,
        counts = data_out
    )

sp = load_spectrum_json("thorium-20250617012217.json")
sp = add_noise(normalize(sp))
draw_simple_spectrum(sp, "Thorium spectrum with added noise")

The output looks like this:

Image by author

As we can see, the noise level is not that big, so it does not distort the peaks. At the same time, it adds some diversity to the data.

A more sophisticated approach can also be used. For example, some radioactive minerals contain thorium, uranium, or potassium in different proportions. It may be possible to combine spectra of existing samples to get some “new” ones.
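As an illustration, a minimal sketch of this idea could look like this (the 0.7 mixing weight is an arbitrary assumption, and both spectra must be normalized and share the same calibration so that the channels match; sp_thorium and sp_uranium are hypothetical inputs):

def mix_spectra(sp1: Spectrum, sp2: Spectrum, weight: float) -> Spectrum:
    """ Make a synthetic spectrum as a weighted sum of two normalized spectra """
    counts = weight * np.array(sp1.counts) + (1.0 - weight) * np.array(sp2.counts)
    return Spectrum(
        duration=sp1.duration,
        a0=sp1.a0, a1=sp1.a1, a2=sp1.a2,
        counts=counts / counts.max(),  # keep the 0..1 range
    )

# Hypothetical example: a "mineral" with 70% thorium and 30% uranium signatures
sp_mixed = mix_spectra(sp_thorium, sp_uranium, weight=0.7)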

3.2.3 Feature Extraction
Technically, we can use all 1024 values “as is” as an input for our ML model. However, this approach has two problems:

  • First, it is redundant – we are mostly interested only in particular isotopes. For example, on the last graph, there is a clearly visible peak at 238 keV, which belongs to lead-212, and a less visible peak at 338 keV, which belongs to actinium-228.
  • Second, it is device-specific. I want the model to be universal. Using only the energies of the selected isotopes as input allows us to use any gamma spectrometer model.

Finally, I created this list of isotopes:

isotopes = [ 
    # Americium
    ("Am-241", 59.5),
    # Potassium
    ("K-40", 1460.0),
    # Radium
    ("Ra-226", 186.2),
    ("Pb-214", 242.0),
    ("Pb-214", 295.2),
    ("Pb-214", 351.9),
    ("Bi-214", 609.3),
    ("Bi-214", 1120.3),
    ("Bi-214", 1764.5),
    # Thorium
    ("Pb-212", 238.6),
    ("Ac-228", 338.2),
    ("TI-208", 583.2),
    ("AC-228", 911.2),
    ("AC-228", 969.0),
    # Uranium
    ("Th-234", 63.3),
    ("Th-231", 84.2),
    ("Th-234", 92.4),
    ("Th-234", 92.8),
    ("U-235", 143.8),
    ("U-235", 185.7),
    ("U-235", 205.3),
    ("Pa-234m", 766.4),
    ("Pa-234m", 1000.9),
]

def isotopes_save(filename: str):
    """ Save isotopes list to a file """
    with open(filename, "w") as f_out:
        json.dump(isotopes, f_out)

Only the spectrum values at these isotopes’ energies will be used as input for the model. I also created a method to save the list into a JSON file – it will be needed for loading the model later. Some isotopes, like uranium-235, may be present in minuscule amounts and not be practically detectable. Readers are welcome to improve the list on their own.
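The test code later in this article also calls an isotopes_load counterpart. It is symmetric (a minimal sketch, assuming the same JSON format):

def isotopes_load(filename: str) -> list:
    """ Load the isotopes list from a file """
    with open(filename) as f_in:
        return [(name, energy) for name, energy in json.load(f_in)]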

Now, let’s create a method that converts a Radiacode spectrum to a list of features:

from typing import List

def get_features(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Extract features from the spectrum """
    energies = [energy for _, energy in isotopes]
    data = [spectrum.counts[spectrum.energy_to_channel(energy)] for energy in energies]
    return np.array(data)

Practically, we converted a list of 1024 values into a NumPy array with only 23 elements, which is a good dimensionality reduction!
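As a quick sanity check, we can extract the features from the thorium spectrum used before:

sp = load_spectrum_json("thorium-20250617012217.json")
features = get_features(normalize(sp), isotopes)
print(features.shape)
#> (23,)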

3.3 Training

Finally, we are ready to train the ML model.

First, let’s combine all the files into one dataset. The exact list depends on the samples you have; mine looks like this:

import glob
from typing import Tuple

all_files = [
    ("Americium", glob.glob("../data/train/americium*.json")),
    ("Radium", glob.glob("../data/train/radium*.json")),
    ("Thorium", glob.glob("../data/train/thorium*.json")),
    ("Uranium Glass", glob.glob("../data/train/uraniumGlass*.json")),
    ("Uranium Glaze", glob.glob("../data/train/uraniumGlaze*.json")),
    ("Uraninite", glob.glob("../data/train/uraninite*.json")),
    ("Background", glob.glob("../data/train/background*.json")),
]

def prepare_data(augmentation: int) -> Tuple[np.array, np.array]:
    """ Prepare data for training """
    x, y = [], []
    for name, files in all_files:
        for filename in files:
            print(f"Processing {filename}...")
            sp = normalize(load_spectrum_json(filename))
            for _ in range(augmentation):
                sp_out = add_noise(sp)
                x.append(get_features(sp_out, isotopes))
                y.append(name)

    return np.array(x), np.array(y)


X_train, y_train = prepare_data(augmentation=10)

As we can see, our y-values contain names like “Americium.” I will use a LabelEncoder to convert them into numeric values:

from sklearn.preprocessing import LabelEncoder


le = LabelEncoder()
le.fit(y_train)
y_train = le.transform(y_train)

print("X_train:", X_train.shape)
#> (1900, 23)

print("y_train:", y_train.shape)
#> (1900,)

I decided to use the open-source XGBoost model, which is based on gradient tree boosting (original paper link). I will also use GridSearchCV to find the optimal parameters:

from xgboost import XGBClassifier
from sklearn.model_selection import GridSearchCV


bst = XGBClassifier(n_estimators=10, max_depth=2, learning_rate=1)
clf = GridSearchCV(
    bst,
    {
        "max_depth": [1, 2, 3, 4],
        "n_estimators": range(2, 20),
        "learning_rate": [0.001, 0.01, 0.1, 1.0, 10.0]
    },
    verbose=1,
    n_jobs=1,
    cv=3,
)
clf.fit(X_train, y_train)

print("best_score:", clf.best_score_)
#> best_score: 0.99474

print("best_params:", clf.best_params_)
#> best_params: {'learning_rate': 1.0, 'max_depth': 1, 'n_estimators': 9}

Last but not least, I need to save the trained model:

isotopes_save("../models/V1/isotopes.json")
bst.save_model("../models/V1/XGBClassifier.json")
np.save("../models/V1/LabelEncoder.npy", le.classes_)

Obviously, we need not only the model itself but also the list of isotopes and the labels. If we change something, the data will no longer match, and the model will produce garbage, so model versioning is our friend!

To verify the results, I need data that the model did not “see” before. I had already collected several XML files using the Radiacode Android app, and just for fun, I decided to use them for testing.

First, I created a method to load the data:

import xmltodict

def load_spectrum_xml(file_path: str) -> Spectrum:
    """ Load the spectrum from a Radiacode Android app file """
    with open(file_path) as f_in:
        doc = xmltodict.parse(f_in.read())
        result = doc["ResultDataFile"]["ResultDataList"]["ResultData"]
        spectrum = result["EnergySpectrum"]
        cal = spectrum["EnergyCalibration"]["Coefficients"]["Coefficient"]
        a0, a1, a2 = float(cal[0]), float(cal[1]), float(cal[2])
        duration = int(spectrum["MeasurementTime"])
        data = spectrum["Spectrum"]["DataPoint"]
        return Spectrum(
            duration=duration,
            a0=a0, a1=a1, a2=a2,
            counts=[int(x) for x in data],
        )

The XML file contains the same spectrum values that I used in the JSON files, along with some extra data that is not required for our task.

As a practical example of data collection: this Victorian creamer from the 1890s is 130 years old, and trust me, you cannot get this data with an SQL request 🙂

Image by author

This uranium glass is slightly radioactive (the background level is about 0.08 µSv/h), but it’s at a safe level and cannot cause any harm.

The test code itself is simple:

# Load model
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")
isotopes = isotopes_load("../models/V1/isotopes.json")
le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")

# Load data
test_data = [
    ["../data/test/background1.xml", "../data/test/background2.xml"],
    ["../data/test/thorium1.xml", "../data/test/thorium2.xml"],
    ["../data/test/uraniumGlass1.xml", "../data/test/uraniumGlass2.xml"],
    ...
]

# Predict
for group in test_data:
    data = []
    for filename in group:
        spectrum = load_spectrum_xml(filename)
        features = get_features(normalize(spectrum), isotopes)
        data.append(features)

    X_test = np.array(data)
    preds = bst.predict(X_test)
    preds = le.inverse_transform(preds)
    print(preds)

#> ['Background' 'Background']
#> ['Thorium' 'Thorium']
#> ['Uranium Glass' 'Uranium Glass']
#> ...

Here, I also grouped the values from different samples and used batch prediction.

As we can see, all results are correct. I was also going to make a confusion matrix, but at least for my relatively small number of samples, all objects were detected properly.
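For readers who want to build one anyway, a minimal sketch with scikit-learn could look like this (y_true and y_pred are assumed to be lists of label names collected in the prediction loop above):

from sklearn.metrics import confusion_matrix

# y_true: the expected label for every test file, y_pred: the model output
matrix = confusion_matrix(y_true, y_pred, labels=le.classes_)
print(le.classes_)
print(matrix)  # rows are true labels, columns are predictions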

4. Testing

As a final part of this article, let’s use the model in real-time with a Radiacode device.

The code is almost the same as at the beginning of the article, so I’ll show only the crucial parts. Using the radiacode library, I connect to the device, read the spectrum once per minute, and use these values to predict the isotopes:

import logging
import time

import numpy as np
from radiacode import RadiaCode, RealTimeData, Spectrum
from sklearn.preprocessing import LabelEncoder
from xgboost import XGBClassifier


le = LabelEncoder()
le.classes_ = np.load("../models/V1/LabelEncoder.npy")
isotopes = isotopes_load("../models/V1/isotopes.json")
bst = XGBClassifier()
bst.load_model("../models/V1/XGBClassifier.json")


def read_spectrum(rc: RadiaCode):
    """ Read spectrum data """
    spectrum: Spectrum = rc.spectrum()
    logging.debug(f"Spectrum: {spectrum.duration} collection time")
    result = predict_spectrum(spectrum)
    logging.debug(f"Predict: {result}")

def predict_spectrum(sp: Spectrum) -> str:
    """ Predict the isotope from a spectrum """
    features = get_features(normalize(sp), isotopes)
    preds = bst.predict([features])
    return le.inverse_transform(preds)[0]

def read_cps(rc: RadiaCode):
    """ Read CPS (counts per second) values """
    data = rc.data_buf()
    for record in data:
        if isinstance(record, RealTimeData):
             logging.debug(f"CPS: {record.count_rate:.2f}")


if __name__ == '__main__':
    logging.basicConfig(
        level=logging.DEBUG, format="[%(asctime)-15s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S"
    )

    rc = RadiaCode()
    logging.debug(f"ML model loaded")
    fw_version = rc.fw_version()
    logging.debug(f"Device connected:, firmware {fw_version[1]}")
    rc.spectrum_reset()
    while True:
        for _ in range(12):
            read_cps(rc)
            time.sleep(5.0)

        read_spectrum(rc)

Here, I read the CPS (counts per second) values from the Radiacode every 5 seconds, just to be sure that the device works. Every minute, I read the spectrum and use it with the model.

Before running the app, I placed the Radiacode detector near the object:

Image by author

This vintage watch was made in the 1950s, and it has radium paint on the digits. Its radiation level is ~5 times the background, but it is still within a safe range (and is actually about half of what everyone gets on an airplane during a flight).

Now, we can run the code and see the results in real-time:

As we can see, the model’s prediction is correct.

Readers who don’t have Radiacode hardware can use the raw log files to replay the data. The link is added at the end of the article.
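A replay can be as simple as running the saved JSON spectra through the same prediction function (a minimal sketch; the file mask is an assumption):

import glob

# Replay saved spectra instead of reading from the device
for filename in sorted(glob.glob("../data/train/thorium*.json")):
    spectrum = load_spectrum_json(filename)
    print(filename, "->", predict_spectrum(spectrum))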

Conclusion

In this article, I explained the process of creating a machine learning model for predicting radioactive isotopes. I also tested the model with some radioactive samples that can be legally purchased.

I also made an interactive HTMX frontend for the model, but this article is already too long. If there is public interest in this topic, it will be published in the next part.

As for the model itself, there are several ways for improvement:

  • Adding more data samples and isotopes. I’m not a nuclear institution, and my choice is limited (not only from financial and legal perspectives, but also by the free space in my apartment). Readers who have access to other isotopes and minerals are welcome to share their data, and I will try to add it to the model.
  • Adding more features. In this model, I normalized all spectra, and it works well. However, in this way, we lose the information about the radioactivity level of the objects. For example, uranium glass has a much lower radiation level compared to uranium ore. To distinguish these objects more effectively, we can add the radioactivity level as an additional model feature (see the sketch after this list).
  • Testing other model types. It looks promising to use a vector search to find the closest embeddings. It can also be more interpretable, since the model could show several closest isotopes. A library like FAISS can be useful for that. Another way is to use a deep learning model, which can also be interesting to test.
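As a sketch of the second idea, the count rate can be computed from the raw spectrum before normalization and appended to the feature vector (a hypothetical extension, not part of the current model):

def get_features_with_level(spectrum: Spectrum, isotopes: List) -> np.array:
    """ Hypothetical extension: append the overall count rate as an extra feature """
    # The count rate must be taken from the raw counts, before normalization
    cps = np.array(spectrum.counts).sum() / spectrum.duration
    features = get_features(normalize(spectrum), isotopes)
    return np.append(features, cps)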

In this article, I used a Radiacode radiation detector. It is a nice device that allows for some interesting experiments (disclaimer: I don’t have any profit or other commercial interest from its sales). For those readers who don’t have Radiacode hardware, all the collected data is freely available on Kaggle.

The full source code for this article is available on my Patreon page. This support helps me buy equipment and electronics for future tests. Readers are also welcome to connect via LinkedIn, where I periodically publish smaller posts that are not big enough for a full article.

Thanks for reading.
