Summary: bias produced by incomplete publishing of trials/data
Example: imagine you're reporting on North Korea's Football Team and you only report the games the team wins. The audience will conclude the team is doing far better than it really is. In research, a common example is publication bias in meta-analysis: if only positive findings are published and neutral findings are not, reviewers will conclude a drug is more effective than it really is. This happens because, for publishers, positive findings are more exciting to publish!
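A minimal simulation sketch of this effect, assuming a hypothetical drug with a modest true effect (0.2 standardised units), 30 patients per arm and 1000 small trials, where only significant positive trials get "published"; all numbers are made up for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2   # assumed modest true drug effect (illustrative)
N_PER_ARM = 30      # patients per arm in each small trial (illustrative)
N_TRIALS = 1000     # number of trials conducted

all_effects, published_effects = [], []
for _ in range(N_TRIALS):
    drug = rng.normal(TRUE_EFFECT, 1, N_PER_ARM)   # outcomes on the drug
    placebo = rng.normal(0, 1, N_PER_ARM)          # outcomes on placebo
    effect = drug.mean() - placebo.mean()
    _, p = stats.ttest_ind(drug, placebo)
    all_effects.append(effect)
    if p < 0.05 and effect > 0:                    # only "positive" trials are published
        published_effects.append(effect)

print(f"True effect:                   {TRUE_EFFECT:.2f}")
print(f"Mean effect, all trials:       {np.mean(all_effects):.2f}")
print(f"Mean effect, published trials: {np.mean(published_effects):.2f}")
```

The published trials report a much larger average effect than the true one, which is exactly what a reviewer pooling only the published literature would see.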
Summary: bias produced by comparing a variable across two groups that differ systematically
Example: imagine you and your colleague are required to do a ward round on two wards. Unfortunately, your colleague picks the ward with the easier patients and finishes quicker than you. When he celebrates, you explain this is a case of selection bias, as his patients were easier; for a fair comparison to be made, the two patient groups should be comparable, e.g. through blind allocation.
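A minimal sketch of the ward-round example, assuming a hypothetical "minutes needed per patient" score; the numbers are invented purely to show how a biased split differs from a random (blind) split:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical time (minutes) each of 40 patients needs on the round.
patient_time = rng.normal(15, 5, 40)

# Biased allocation: your colleague picks the 20 easiest patients.
ordered = np.sort(patient_time)
colleague_biased, you_biased = ordered[:20], ordered[20:]

# Blind (random) allocation: patients are split without looking at difficulty.
shuffled = rng.permutation(patient_time)
colleague_random, you_random = shuffled[:20], shuffled[20:]

print("Biased split -> colleague:", round(colleague_biased.mean(), 1),
      "min/patient, you:", round(you_biased.mean(), 1))
print("Random split -> colleague:", round(colleague_random.mean(), 1),
      "min/patient, you:", round(you_random.mean(), 1))
```

Under the biased split the groups are not comparable, so any difference in finishing time says nothing about who works faster; under random allocation the two groups have similar average difficulty.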
Definitions
Null hypothesis example: a drug has no effect on a disease.
Type 1 Error: incorrectly rejecting the null hypothesis, i.e. stating a drug has an effect when there is no effect (a false positive).
Type 2 Error: incorrectly accepting (failing to reject) the null hypothesis, i.e. stating a drug has no effect when there is an effect (a false negative).
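A small sketch that maps these definitions onto the two ways a trial's conclusion can disagree with the truth; the function name and printed wording are illustrative only:

```python
def classify_decision(drug_truly_works: bool, test_says_it_works: bool) -> str:
    """Label the outcome of a hypothesis test about a drug."""
    if test_says_it_works and not drug_truly_works:
        return "Type 1 error (false positive: claimed an effect that isn't there)"
    if not test_says_it_works and drug_truly_works:
        return "Type 2 error (false negative: missed a real effect)"
    return "Correct decision"

print(classify_decision(drug_truly_works=False, test_says_it_works=True))   # Type 1
print(classify_decision(drug_truly_works=True,  test_says_it_works=False))  # Type 2
```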
Reducing Errors
Type 1 errors are reduced by raising your threshold for significance. This is normally a 95% confidence level (p < 0.05); moving to 99% (p < 0.01) would mean you require more evidence to state the drug has an effect, so the type 1 error rate falls from 5% to 1%.
Type 2 errors are reduced by increasing the sample size. A larger sample increases the study's power, allowing it to detect smaller drug effects (see the sketch below, which illustrates both points).
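A minimal simulation sketch, assuming an illustrative two-arm trial with a hypothetical true effect of 0.5 standardised units for the power comparison; the sample sizes and thresholds are chosen only to show the direction of the change:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def positive_rate(true_effect, n_per_arm, alpha, n_sims=2000):
    """Fraction of simulated trials that declare 'the drug works' at this alpha."""
    positives = 0
    for _ in range(n_sims):
        drug = rng.normal(true_effect, 1, n_per_arm)
        placebo = rng.normal(0, 1, n_per_arm)
        _, p = stats.ttest_ind(drug, placebo)
        positives += p < alpha
    return positives / n_sims

# Type 1 error rate: the drug truly has NO effect, yet we declare an effect.
print("Type 1 rate at alpha 0.05:", positive_rate(0.0, 50, 0.05))   # roughly 5%
print("Type 1 rate at alpha 0.01:", positive_rate(0.0, 50, 0.01))   # roughly 1%

# Type 2 error rate: the drug truly HAS an effect, yet we miss it (1 - power).
print("Type 2 rate with n=20 per arm: ", 1 - positive_rate(0.5, 20, 0.05))
print("Type 2 rate with n=100 per arm:", 1 - positive_rate(0.5, 100, 0.05))
```

Tightening the significance threshold lowers the type 1 error rate, while enlarging the sample raises power and so lowers the type 2 error rate.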