Using kimimaro for voxels skeletonization #81
Hi ZiYuanZiWan, thanks for writing in. I don't think I understand exactly what your issue is. Are you unable to extract a skeleton, or are you having trouble joining together skeletons that have already been extracted? Have you generated Kimimaro skeletons, and were they of sufficient quality? Note that TEASAR skeletons are trees and so cannot represent loops. Will
I am very sorry that, when describing the problem, I did not state my specific questions. Let me state them now:
My main difficulty is a serious problem importing my data: I cannot provide the original data in image format. Can I still use Kimimaro? I hope your deep knowledge can guide me in completing this task. I know it is not difficult for you, but for me it is quite challenging. Once again, I would like to express my sincere gratitude.
I see, so you are still having trouble rendering a numpy image? Your code for that small example worked for me. Would you be able to share your data? I might be working on a similar problem right now in terms of sparse points. If you're working with a mesh, the problem is that you'll need to fill in the image volumetrically after rendering the border. In terms of simplifying the resulting skeleton, the …
Yes, I still cannot obtain a good numpy image.
Looking forward to seeing your data! Hope you got a good night's sleep!
In the code you provided, there was a variable named "mesh." If that refers to a 3D triangle mesh (or similar), then you would need to fill in the voxels that are inside the mesh as well as on the boundary. However, there may be other interpretations of the word "mesh," such as a grid, in which case this point may not apply.
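For reference, a minimal sketch of that volumetric fill, assuming the mesh surface has already been rasterized into a binary volume (this uses scipy's binary_fill_holes, which is one possible approach, not necessarily the one intended above):

import numpy as np
from scipy import ndimage

def fill_boundary_volume(boundary):
    # boundary: 3D array with the rendered mesh surface set to nonzero.
    # binary_fill_holes only fills regions the surface fully encloses,
    # so the rasterized boundary must be watertight in the voxel grid.
    return ndimage.binary_fill_holes(boundary > 0)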
Thank you for your patience. It took me quite a while to package my raw data and provide as much information as possible. Let me explain the purpose of each file in the zip folder [file.zip]:
(1) testodb.odb: the original database file produced by the Abaqus software I use. It stores the voxel information (the finite element elements) along with additional mechanical analysis results. Since you may have difficulty opening it or finding meaningful data in it, I have also extracted the exact coordinates of the eight corner points of each 3D cube mesh for you.
(2) voxels_coordinates.txt: the corner coordinates described above. Due to limitations in how I wrote them out with numpy, they are saved in a flat 1D format, where each set of eight consecutive coordinates represents one 3D cube mesh.
(3) vox_ext.py: if you happen to have Abaqus, you can paste this code snippet directly into the command stream to obtain voxels_coordinates.txt and the raw numpy array (eight corner points per element).
(4-5) A static image and an animated image showing the original state of these voxels. You can picture them as a structure built from equal-sized cubes, like interlocking building blocks.
Additionally, when I say "mesh," I specifically mean the 3D cube mesh. Abaqus also offers a 3D triangle mesh, but it is not commonly used. The bottleneck I face is whether I can convert these 3D cube meshes into a standard numpy image, since that determines whether I can use kimimaro for my work.
Now I would respectfully like to demonstrate, using another dataset [coord_1308points_wall.txt], the skeleton I have generated. I modified the provided code, adding visualization with the skimage library in conjunction with matplotlib. The skeletonization algorithm used by skimage is from [T.-C. Lee, R.L. Kashyap and C.-N. Chu, Building skeleton models via 3-D medial surface/axis thinning algorithms. Computer Vision, Graphics, and Image Processing, 56(6):462-478, 1994.]. After executing this code, you can compare the resulting 'Skeleton' with the 'Original Data'.

import numpy as np
from skimage import morphology
import matplotlib.pyplot as plt

def ske(mesh, x, y, z, openfile):
    me = int(mesh)           # edge length of one cubic voxel
    depth = int(x // me)     # grid dimensions measured in voxels
    height = int(y // me)
    width = int(z // me)
    voxels = np.zeros((depth, height, width), dtype=np.uint8)
    with open(openfile, 'r') as file:
        centers = [tuple(map(int, line.split())) for line in file]
    print(len(centers))
    for center in centers:
        # Map each voxel center coordinate to its grid index.
        zi = center[0] // me
        yi = center[2] // me
        xi = center[1] // me
        voxels[zi, yi, xi] = 255
    print(voxels)
    binary_array = voxels
    skeleton = morphology.skeletonize_3d(binary_array)
    # Side-by-side render of the input voxels and the thinned skeleton.
    fig = plt.figure(figsize=(10, 5))
    ax1 = fig.add_subplot(121, projection='3d')
    ax1.voxels(binary_array, facecolors='b', edgecolor='k')
    ax1.set_title('Original Data')
    ax2 = fig.add_subplot(122, projection='3d')
    ax2.voxels(skeleton, facecolors='r', edgecolor='k')
    ax2.set_title('Skeleton')
    plt.tight_layout()
    plt.show()
    return voxels, skeleton

v, st = ske(100, 2000, 2000, 2000, 'coord_1308points_wall.txt')

This is the second difficulty I encountered: apart from being unable to obtain a good numpy image, I cannot apply the end-point operation I mentioned earlier to the already-extracted skeleton.
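One possible way to get those end points from a binary skeleton like the one ske() returns (a sketch, not kimimaro's API; an end point here is taken to be a skeleton voxel with exactly one neighbor in its 26-neighborhood):

import numpy as np
from scipy import ndimage

def skeleton_endpoints(skeleton):
    # Treat any nonzero voxel as part of the skeleton.
    skel = (skeleton > 0).astype(np.uint8)
    # 3x3x3 kernel that counts the 26 neighbors of each voxel.
    kernel = np.ones((3, 3, 3), dtype=np.uint8)
    kernel[1, 1, 1] = 0
    neighbors = ndimage.convolve(skel, kernel, mode='constant', cval=0)
    # End points: skeleton voxels with exactly one skeleton neighbor.
    return np.argwhere((skel == 1) & (neighbors == 1))

# e.g. endpoints = skeleton_endpoints(st)  # st: skeleton returned by ske()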
“However, [voxels_coordinates.txt] seems to be potentially corrupted?” Hello, [voxels_coordinates.txt] is not corrupted; its data is simply recorded differently from [coord_1308points_wall.txt]. [voxels_coordinates.txt] takes a more detailed approach: every eight coordinate points represent one 3D cube mesh (coordinates 0-7 form the 0th cube, coordinates 8-15 the 1st cube, and so on), directly recording the eight vertices of each cube. There are 2400 3D coordinates in total in [voxels_coordinates.txt], and 2400 / 8 = 300, so there are 300 3D cube meshes. I am not sure which method is more suitable for constructing numpy images, so I have provided you with both formats.
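For what it's worth, a small sketch of reading that layout back into cube centers (assuming whitespace-separated numbers; the grouping follows the description above but is untested against the actual file):

import numpy as np

# Every 3 numbers form one 3D point; every 8 consecutive points form one
# cube, so 2400 points regroup into 300 cubes of 8 corners each.
coords = np.loadtxt('voxels_coordinates.txt').reshape(-1, 3)
cubes = coords.reshape(-1, 8, 3)   # (300, 8, 3)
centers = cubes.mean(axis=1)       # (300, 3): one center point per cube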
Oh, in that case, you can use Kimimaro directly with the numpy image generated by my code for …
Thank you for providing the code. I have now successfully used kimimaro for skeleton extraction, but the result does not seem very good, which is probably related to the two parameters (scale and const). I also have a small idea: I could use skeletons already extracted by other Python libraries as the data source, and use kimimaro only for endpoint extraction and skeleton simplification. However, I have not found documentation for [.terminals() and .downsample(factor)]. Do you have a help manual, or could you show me the code directly? In any case, thank you very much. After my work is completed, I will place kimimaro in the most prominent position of the Acknowledgements.
You can try using much smaller scale and const. Maybe scale=1 and const=1 (assuming you provide anisotropy=(1,1,1))? The shape is very thin. Try playing around with it.

skels = kimimaro.skeletonize(...)
skel = skels[1]  # since it's a binary image
terminals = skel.terminals()
ds_skel = skel.downsample(2)  # downsample factor of 2

I hope that helps! Thanks for the acknowledgement (assuming you are successful)!
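Putting the pieces together, a fuller sketch of the call (parameter values are only starting points, and v is assumed to be the voxel array returned by ske() above):

import numpy as np
import kimimaro

labels = (v > 0).astype(np.uint8)   # binary volume: background 0, object 1

skels = kimimaro.skeletonize(
    labels,
    teasar_params={'scale': 1, 'const': 1},  # small values for thin shapes
    anisotropy=(1, 1, 1),                    # isotropic voxels
)

skel = skels[1]                  # skeletons are keyed by label value
terminals = skel.terminals()     # indices of end-point vertices
ds_skel = skel.downsample(2)     # reduce vertex count by a factor of 2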
Dear Seung lab:
I apologize for asking a question that may not be directly related to your expertise. Despite searching through numerous research papers on Google Scholar, I am still unable to complete my work. I have been learning Python for only two months; there are many foundational papers on skeletonization algorithms, but I am struggling to translate the concepts from the papers into actual code. Most skeletonization algorithms have been extended to fields like medicine and botany, but there are few Python libraries available to help me complete my work.
Here is my problem: I have included a screenshot from a commercial finite element software called "Abaqus". As you can see, this structure resembles a topological structure. Since this software is designed for computation, it does not produce the kinds of images available from medical instruments. Using the Python interface of this software, I have written snippets of code to obtain the center coordinates of each voxel in this structure (in the software, each voxel has a fixed size and is a regular cube with equal length, width, and height). I have also obtained the size of the voxels (e.g., 30.). I have tried several Python libraries; most require images as input, while only a few accept numpy arrays. I have written some immature code to build a binary array from these center coordinates, which should let me apply skeletonization algorithms from some Python libraries.
coord_26_cube.txt
Here, I have attached a simple test file that can demonstrate the effectiveness of my code under normal conditions. The file represents a 90x90x90 cube with voxel size 30. It has been subdivided into 27 parts, with one small cube removed, resulting in 26 remaining small cubes. The coordinates in the file represent the positions of the center points.
Although I achieved some good results with smaller libraries, my next step is to obtain the end points of these skeletons and connect them accurately. Since many Python libraries cover only part of this task, I have been unable to find a single library that provides everything I need, and switching libraries brings the problem of inconsistent data formats. With limited time left before my final deadline, I cannot afford to keep trying libraries one by one.
I sincerely hope that you can provide me with some assistance, even if it is just a little guidance or inspiration. To be honest, GitHub is one of the best communities I have come across, which is why I dare to ask you directly. Thank you for taking the time to read this lengthy passage. I truly appreciate your help.