How to make this work for custom videos #8

Open
abhinavsagar opened this issue May 1, 2019 · 7 comments

@abhinavsagar

No description provided.

@pch9520

pch9520 commented May 5, 2019

I recently gave this a try.
First, I calibrated my camera with calibrate_camera.py;
second, I calibrated the meters-per-pixel conversion for images taken by my camera;
third, I amended the parameters in line_fit.py, perspective_transform.py, and line_fit_video.py.
Now I can use this code to detect lane lines in my own video, although at some moments the detection is not accurate.
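
For reference, that calibration step usually boils down to the standard OpenCV chessboard routine. A minimal sketch, assuming a 9x6 chessboard and a camera_cal/ folder of calibration photos (both assumptions; calibrate_camera.py in the repo may differ):

```python
import glob
import cv2
import numpy as np

nx, ny = 9, 6  # inner chessboard corners (assumed board size)
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D board points and 2D image points
for fname in glob.glob('camera_cal/*.jpg'):  # hypothetical folder of calibration photos
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# mtx (camera matrix) and dist (distortion coefficients) are what you later
# pass to cv2.undistort() on every frame of your video.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```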

@StephanieCoding

When I use my own videos to test, this error often occurs. Does anyone know which parameters I should amend?

C:\Users\moon5\Anaconda3\python.exe D:/Documents/Desktop/智能驾驶/advanced_lane_detection-master/advanced_lane_detection-master/line_fit_video.py
Traceback (most recent call last):
  File "D:/Documents/Desktop/智能驾驶/advanced_lane_detection-master/advanced_lane_detection-master/line_fit_video.py", line 104, in <module>
    annotate_video('video_1.mp4', 'out.mp4')
  File "D:/Documents/Desktop/智能驾驶/advanced_lane_detection-master/advanced_lane_detection-master/line_fit_video.py", line 98, in annotate_video
    annotated_video = video.fl_image(annotate_image)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 514, in fl_image
    return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\Clip.py", line 137, in fl
    newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
  File "", line 2, in set_make_frame
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\decorators.py", line 14, in outplace
    f(newclip, *a, **k)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 669, in set_make_frame
    self.size = self.get_frame(0).shape[:2][::-1]
  File "", line 2, in get_frame
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\decorators.py", line 89, in wrapper
    return f(*new_a, **new_kw)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\Clip.py", line 94, in get_frame
    return self.make_frame(t)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\Clip.py", line 137, in <lambda>
    newclip = self.set_make_frame(lambda t: fun(self.get_frame, t))
  File "C:\Users\moon5\Anaconda3\lib\site-packages\moviepy\video\VideoClip.py", line 514, in <lambda>
    return self.fl(lambda gf, t: image_func(gf(t)), apply_to)
  File "D:/Documents/Desktop/智能驾驶/advanced_lane_detection-master/advanced_lane_detection-master/line_fit_video.py", line 43, in annotate_image
    ret = line_fit(binary_warped)
  File "D:\Documents\Desktop\智能驾驶\advanced_lane_detection-master\advanced_lane_detection-master\line_fit.py", line 79, in line_fit
    left_fit = np.polyfit(lefty, leftx, 2)
  File "C:\Users\moon5\Anaconda3\lib\site-packages\numpy\lib\polynomial.py", line 550, in polyfit
    raise TypeError("expected non-empty vector for x")
TypeError: expected non-empty vector for x
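
For reference, this TypeError means np.polyfit() was handed empty pixel arrays: the sliding-window search in line_fit() found no lane pixels in that frame, which typically happens when the perspective-transform src points or the thresholding parameters do not match the new video. A minimal guard, as a sketch only (leftx/lefty appear in the traceback; rightx/righty are assumed to be the analogous right-lane arrays):

```python
# Hypothetical guard inside line_fit(): skip the fit when no lane pixels were
# found, so np.polyfit() is never called on an empty vector.
if len(leftx) == 0 or len(rightx) == 0:
    return None  # let the caller fall back to the previous frame's fit

left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
```

The caller (annotate_image in line_fit_video.py, per the traceback) would then also need to handle a None return; the real fix is usually re-tuning the parameters discussed below.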

@pch9520

pch9520 commented Jun 5, 2019

Did you feed your video in directly without amending any parameters?
If so, it's normal to encounter issues.
@StephanieCoding

@neishka

neishka commented Aug 22, 2019

I am facing the same issue. @pch9520, what parameters are you referring to?

@pch9520

pch9520 commented Aug 22, 2019

> I am facing the same issue. @pch9520, what parameters are you referring to?

If you want to feed in your own video, I think you should amend the following parameters:
1. the size of each frame of the video;
2. some parameters in perspective_transform.py to fit your scene (see the sketch below this comment):
src = np.float32(
    [[200, 720],
     [1100, 720],
     [595, 450],
     [685, 450]])
dst = np.float32(
    [[300, 720],
     [980, 720],
     [300, 0],
     [980, 0]])
@neishka
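
For anyone unsure how those src/dst points are consumed, here is a minimal sketch of the usual OpenCV perspective warp (the warp_frame helper and the 1280x720 frame size are assumptions; the repo's perspective_transform.py may organize this differently). Pick src on a straight-road frame of your own video so the four points hug the lane; dst decides where they land in the bird's-eye view:

```python
import cv2
import numpy as np

# src: four points on the road in the original (front-facing) view.
# dst: where those points should land in the top-down (bird's-eye) view.
# These values assume a 1280x720 frame; re-measure them for your own video.
src = np.float32([[200, 720], [1100, 720], [595, 450], [685, 450]])
dst = np.float32([[300, 720], [980, 720], [300, 0], [980, 0]])

M = cv2.getPerspectiveTransform(src, dst)     # front view -> top-down
Minv = cv2.getPerspectiveTransform(dst, src)  # top-down -> front view (for drawing back)

def warp_frame(frame):
    # Hypothetical helper: warp one undistorted frame to the bird's-eye view.
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, M, (w, h), flags=cv2.INTER_LINEAR)
```

If the lane lines come out skewed or cut off in the warped image, the src points (and the assumed frame size) are the first things to adjust.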

@Phillweston

> > I am facing the same issue. @pch9520, what parameters are you referring to?
>
> If you want to feed in your own video, I think you should amend the following parameters:
> 1. the size of each frame of the video;
> 2. some parameters in perspective_transform.py to fit your scene:
> src = np.float32(
>     [[200, 720],
>      [1100, 720],
>      [595, 450],
>      [685, 450]])
> dst = np.float32(
>     [[300, 720],
>      [980, 720],
>      [300, 0],
>      [980, 0]])
> @neishka

What do you mean by these parameters? If I use my own video, how can I adjust the above parameters to get the best detection results?

@pch9520

pch9520 commented Jul 25, 2020

Hello, @Phillweston
You can take a look at this write-up (there are plenty of others online): https://zhuanlan.zhihu.com/p/46146266
If you want to use it on your own video, first tune the combined-threshold parameters so the lane-line regions are filtered out; then adjust the bird's-eye-view parameters, i.e. set your region of interest and map the front view onto the top-down view. That is enough to get detection working.
If you also want the actual lane curvature and the vehicle's offset from the lane center, you additionally need to modify the distance-calibration parameters (how many meters in the real world one pixel corresponds to, horizontally and vertically, calibrated on the bird's-eye view).
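
As a rough illustration of that distance calibration, here is a minimal sketch of the usual curvature and center-offset computation. The ym_per_px/xm_per_px values, the 1280-pixel frame width, and the curvature_and_offset helper are assumptions to re-measure or rename for your own bird's-eye view; left_fit/right_fit are the second-order polynomial fits produced in line_fit.py:

```python
import numpy as np

# Assumed meters-per-pixel calibration, measured on the bird's-eye view
# (e.g. using a known lane width ~3.7 m and dash length ~3 m as references).
ym_per_px = 30 / 720   # meters per pixel along y (driving direction)
xm_per_px = 3.7 / 700  # meters per pixel along x (lateral direction)

def curvature_and_offset(left_fit, right_fit, ploty, frame_width=1280):
    """Hypothetical helper: lane curvature (m) and offset from lane center (m).

    ploty is assumed to run from 0 (top of the image) to image height - 1 (bottom).
    """
    y_eval = np.max(ploty)  # evaluate near the bottom of the image, closest to the car

    # Lane-line x positions (in pixels) along the image height.
    leftx = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
    rightx = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]

    # Re-fit in world coordinates so the curvature comes out in meters.
    left_fit_m = np.polyfit(ploty * ym_per_px, leftx * xm_per_px, 2)
    curvature = ((1 + (2 * left_fit_m[0] * y_eval * ym_per_px
                       + left_fit_m[1]) ** 2) ** 1.5) / abs(2 * left_fit_m[0])

    # Offset: image center vs. lane center at the bottom row, converted to meters.
    lane_center_px = (leftx[-1] + rightx[-1]) / 2
    offset = (frame_width / 2 - lane_center_px) * xm_per_px
    return curvature, offset
```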
