We look at observational data from ground stations and satellites, along with computer model output, and combine it with our knowledge of meteorology and climate and our past experience with similar weather systems to create a probability of precipitation.
A 10% chance of rain means that, 10 times out of 100 with this weather pattern, we can expect at least 0.01" of rain at a given location.
Likewise, a 90% chance of rain means that, 90 times out of 100 with this weather pattern, we can expect at least 0.01" at a given location.
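To make that frequentist reading concrete, here is a minimal sketch (not an operational method) that treats each member of a hypothetical model ensemble as one of those "100 times" and counts how many produce at least 0.01" at the point; the member values and function name are invented for illustration.

```python
# Hypothetical sketch: estimating a probability of precipitation (PoP) from an
# ensemble of model runs, using the "at least 0.01 inch at a given location"
# threshold described above. The member values are made-up illustration data.

MEASURABLE_RAIN_IN = 0.01  # threshold for "measurable" precipitation

def pop_from_ensemble(member_precip_inches):
    """Fraction of ensemble members producing measurable rain at the point."""
    wet_members = sum(1 for amt in member_precip_inches if amt >= MEASURABLE_RAIN_IN)
    return wet_members / len(member_precip_inches)

# Ten hypothetical model runs for the same location and time window.
members = [0.00, 0.02, 0.00, 0.00, 0.15, 0.00, 0.01, 0.00, 0.00, 0.00]
print(f"PoP: {pop_from_ensemble(members):.0%}")  # -> PoP: 30%
```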
Similarly, the Storm Prediction Center issues severe-weather threat probabilities; you might see something like a 5% chance of a tornado or a 15% chance of severe hail.
This simply means that 5 times out of 100 in this scenario we can expect a tornado, or that 15 times out of 100 we can expect severe hail with thunderstorms.
By definition, there is no difference in the amount of rain forecast by a 10% chance or a 90% chance. Instead, that information is defined elsewhere, typically by a Quantitative Precipitation Forecast (QPF).
Meteorologists are still struggling with the best ways to inform the public about the differences between high-chance, low-QPF events versus low-chance, high-QPF events, and everything else in between.
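As a rough illustration of that communication problem, the sketch below compares a high-chance/low-QPF day with a low-chance/high-QPF day; the numbers are invented, and the "expected amount" is just PoP times QPF, a crude back-of-the-envelope figure rather than any official product, but it shows how two very different days can look deceptively similar if you only glance at one number.

```python
# Hypothetical comparison of why PoP and QPF answer different questions.
# All values are invented; "expected amount" = PoP x QPF is only a rough
# illustration, not an official forecast product.

scenarios = [
    {"name": "High chance, low QPF (light rain likely)", "pop": 0.90, "qpf_in": 0.05},
    {"name": "Low chance, high QPF (hit-or-miss downpour)", "pop": 0.20, "qpf_in": 1.50},
]

for s in scenarios:
    expected = s["pop"] * s["qpf_in"]
    print(f'{s["name"]}: PoP {s["pop"]:.0%}, QPF {s["qpf_in"]:.2f}", '
          f'rough expected amount {expected:.2f}"')
```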
Meteorologists are still human and have their own wet or dry biases that can hedge the chance of precipitation you see. Recently, forecasters have been relying more on bias-correction techniques and statistical models to remove that human bias from precipitation forecasts.
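One simple flavor of that bias-correction idea can be sketched as a reliability-table recalibration: look at how often rain actually fell when a given probability was issued, then nudge future forecasts toward that observed frequency. Everything below, including the function names and the verification history, is invented for illustration and is not an operational technique.

```python
# Minimal sketch of a reliability-table recalibration: adjust a raw PoP toward
# the frequency with which rain actually occurred when similar values were
# issued in the past. All data here are invented.

from collections import defaultdict

def _bin(pop):
    """Group forecast probabilities into 10% bins by rounding to the nearest tenth."""
    return round(pop, 1)

def build_reliability_table(past_pops, rain_occurred):
    """Map each forecast bin to the frequency with which rain actually fell."""
    hits, counts = defaultdict(int), defaultdict(int)
    for pop, rained in zip(past_pops, rain_occurred):
        counts[_bin(pop)] += 1
        hits[_bin(pop)] += int(rained)
    return {b: hits[b] / counts[b] for b in counts}

def calibrate(pop, table):
    """Replace a raw PoP with its bin's observed frequency; keep unseen bins as-is."""
    return table.get(_bin(pop), pop)

# Invented verification history: 30% forecasts verified wet only 1 time in 4,
# hinting at a wet bias at that probability level.
history_pops = [0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7]
history_rain = [True, False, False, False, True, True, True, False]

table = build_reliability_table(history_pops, history_rain)
print(f"Calibrated 30% forecast: {calibrate(0.3, table):.0%}")  # 25%
print(f"Calibrated 70% forecast: {calibrate(0.7, table):.0%}")  # 75%
```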
Sometimes a forecaster will try to incorporate unusual events into the equation (Chernobyl comes to mind). Then the argument might be: historically, the chance of rain was X percent, but we're bumping up the forecast to Y percent because of all the atomic particles released by Chernobyl.