Monday, November 26, 2018

Videos playing in slow mo - GoPro 120fps

Shoot at 60fps in the first place, or fix already-recorded clips with the method below.

1. Go to the Photos app on your iPad.
2. Find the video with the issue and select it.
3. Tap Edit at the top right of the page.
4. Locate the timeline at the bottom of the video; it looks something like this:
sample: lllllll l l l l l l l l l l l l l  l lllllllll
5. Slide your finger from left to right until the bars are all evenly spaced, i.e. like this: lllllllllllllllllllllllllllllllll|lllllllllllll

https://community.gopro.com/t5/GoPro-Apps-for-Mobile/Videos-playing-in-slow-mo/m-p/109614/highlight/true#M4269

Wednesday, November 21, 2018

Varistor


What is a varistor?
A component that protects the other components making up electronic products and devices from surge voltages.
https://goo.gl/uxHSXJ


Bourns Inc. MOV-07D201KTR
https://www.digikey.kr/product-detail/ko/bourns-inc/MOV-07D201KTR/MOV-07D201KTRCT-ND/2408228


A typical varistor protecting a transistor
https://www.electronicshub.org/varistor/




Monday, September 24, 2018

Sequential vs. Functional

Keras Models: Sequential vs. Functional

There are two ways to build Keras models: sequential and functional.

The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.

Alternatively, the functional API allows you to create models that have a lot more flexibility, as you can easily define models where layers connect to more than just the previous and next layers.

Reference:
https://jovianlin.io/keras-models-sequential-vs-functional/
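The difference is easiest to see side by side. A minimal sketch (layer sizes are arbitrary, and `tensorflow.keras` is assumed as the Keras package):

```python
from tensorflow.keras import Input, Model, Sequential
from tensorflow.keras.layers import Dense

# Sequential API: a plain stack of layers, one input, one output.
seq = Sequential([
    Dense(32, activation="relu", input_shape=(16,)),
    Dense(1),
])

# Functional API: layers are called on tensors, so branching, shared
# layers, and multiple inputs/outputs become possible.
inp = Input(shape=(16,))
h = Dense(32, activation="relu")(inp)
out = Dense(1)(h)
func = Model(inputs=inp, outputs=out)
```

Both models here are equivalent; the functional version only pays off once the graph stops being a straight line.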

Monday, September 17, 2018

To change the default setting to display line numbers in vi/vim:

vi ~/.vimrc
then add the following line to the file:

set number
Either source ~/.vimrc, or save and quit with :wq; vim sessions will then show line numbers.

https://stackoverflow.com/a/31105979/8608003

Sunday, August 19, 2018

Teacher forcing

Teacher forcing is a strategy for training recurrent neural networks that uses the ground truth from a prior time step as input, rather than the model's own output.

... the decoder learns to generate targets[t+1...] given targets[...t], conditioned on the input sequence.

https://machinelearningmastery.com/teacher-forcing-for-recurrent-neural-networks/
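The target shifting this describes can be sketched in plain Python (the `<s>` start token and the function name are illustrative, not from the article):

```python
START = "<s>"

def teacher_forcing_pairs(target_seq):
    """At step t the decoder receives the ground-truth tokens
    targets[...t] and is trained to predict targets[t+1...]."""
    decoder_input = [START] + target_seq[:-1]   # targets shifted right by one
    decoder_target = target_seq                 # what the decoder must emit
    return decoder_input, decoder_target

inp, out = teacher_forcing_pairs(["I", "am", "here", "</s>"])
# inp: ['<s>', 'I', 'am', 'here']
# out: ['I', 'am', 'here', '</s>']
```

At inference time there is no ground truth, so the model's own previous prediction is fed back instead.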

Wednesday, August 15, 2018

1/e





Understanding exponentially weighted averages



from Understanding Exponentially Weighted Averages (C2W2L04)
 https://www.youtube.com/watch?v=NxTFlzBjS-4
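The "1/e" heading refers to the rule of thumb from the lecture: with decay rate beta, the average effectively spans about 1/(1-beta) samples, because beta**(1/(1-beta)) is roughly 1/e. A quick sketch (variable names follow the lecture's notation):

```python
import math

def ewa(values, beta=0.9):
    """Exponentially weighted average: v_t = beta * v_{t-1} + (1 - beta) * theta_t."""
    v = 0.0
    out = []
    for theta in values:
        v = beta * v + (1 - beta) * theta
        out.append(v)
    return out

# With beta = 0.9, a sample 10 steps back has decayed by roughly 1/e,
# so the average is approximately over the last 1 / (1 - 0.9) = 10 samples.
print(round(0.9 ** 10, 2), round(1 / math.e, 2))  # 0.35 0.37
```

(The sketch omits the bias correction v_t / (1 - beta**t) that the lecture also covers.)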

RMSprop



So that's RMSprop. Like momentum, it has the effect of damping out the oscillations in (mini-batch) gradient descent, which lets you use a larger learning rate alpha and thereby speeds up learning.

from Andrew Ng's lecture
https://www.youtube.com/watch?v=_e-LFe_igno
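The update Ng describes can be sketched for a single parameter (the hyperparameter defaults here are illustrative, not from the lecture):

```python
def rmsprop_step(w, grad, s, beta=0.9, lr=0.01, eps=1e-8):
    """One RMSprop update for a scalar parameter w.

    s is an exponentially weighted average of squared gradients;
    dividing by sqrt(s) shrinks steps along high-oscillation directions.
    """
    s = beta * s + (1 - beta) * grad ** 2
    w = w - lr * grad / (s ** 0.5 + eps)
    return w, s

w, s = 1.0, 0.0
w, s = rmsprop_step(w, grad=2.0, s=s)
```

Directions with consistently large gradients accumulate a large s, so their effective step size shrinks; that is the damping that makes a larger alpha safe.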

Friday, August 10, 2018

Monday, April 23, 2018

1x1 Convolutions

1x1 Convolutions from Udacity
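The key idea from the video is that a 1x1 convolution is just a per-pixel linear map across channels. A quick NumPy sketch (toy shapes, not from the Udacity material):

```python
import numpy as np

# NHWC-style toy tensor: 4x4 spatial grid with 8 input channels.
H, W, C_in, C_out = 4, 4, 8, 3
x = np.random.rand(H, W, C_in)

# A 1x1 kernel has no spatial extent, so it is just a C_in x C_out matrix
# applied independently at every pixel — i.e. a cheap channel-mixing /
# dimensionality-reduction layer.
kernel = np.random.rand(C_in, C_out)
y = x @ kernel

print(y.shape)  # (4, 4, 3)
```

This is why 1x1 convolutions are used to shrink the channel dimension before expensive larger convolutions (as in Inception-style blocks).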