Shader hardware nowadays is so incredibly flexible and general-purpose that asking “what can and can’t be done with shaders” is like asking “what can and can’t be done with a programming language?”
There are only a few restrictions left, and even those seem to get relaxed with every new graphics card generation.
One is: shaders still can't make recursive calls (as far as I know that restriction still stands; GLSL forbids recursion outright).
Another is: shaders cannot change the primitive type/topology of the vertex data fed to them. If you render GL_TRIANGLES, then that is what the primitive assembly and rasterization stages will process. And even if you have a geometry shader, it can only output a primitive type that is fixed at compile time.
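For illustration, here is a minimal pass-through geometry shader sketch; note that both the input and the output primitive type have to be declared statically in the layout qualifiers:

```glsl
#version 330 core

layout(triangles) in;                          // input primitive type, fixed at compile time
layout(triangle_strip, max_vertices = 3) out;  // output primitive type, also fixed

void main() {
    // simply pass the incoming triangle through unchanged
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```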
Other than that, shaders can nowadays read and write all kinds of memory.
Nowadays the question is rather: “what is a shader stage typically supposed to do?”
Vertex shaders are supposed to transform vertices and compute some interpolated values for the fragment shader to consume.
Fragment shaders, in turn, usually output one or more color values into a framebuffer backed by a texture, a renderbuffer, or the back buffer of the GL context/window.
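As a minimal sketch of that division of labor (the attribute, uniform and varying names are just placeholders I made up):

```glsl
// vertex shader: transform the vertex, hand interpolated data downstream
#version 330 core
layout(location = 0) in vec3 aPosition;   // per-vertex attributes
layout(location = 1) in vec3 aNormal;

uniform mat4 uModelViewProjection;        // supplied by the client application
uniform mat3 uNormalMatrix;

out vec3 vNormal;                         // gets interpolated for the fragment shader

void main() {
    vNormal = uNormalMatrix * aNormal;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
}
```

And the matching fragment shader:

```glsl
// fragment shader: write a color to the bound framebuffer attachment
#version 330 core
in vec3 vNormal;
out vec4 fragColor;

void main() {
    // just visualize the interpolated normal as a color
    fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}
```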
This programming model is well suited to one thing that shaders are particularly good at, and were originally invented for: implementing your own lighting and shading model.
With the fixed-function pipeline in OpenGL you are usually limited to Gouraud shading, i.e. lighting evaluated per vertex and interpolated across the triangle.
With shaders you can do Phong or Blinn-Phong or whatever funky local shading model you can think of.
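For example, a Blinn-Phong fragment shader could look roughly like this (the varyings and material uniforms are assumed to be supplied by a matching vertex shader and by the client):

```glsl
#version 330 core
in vec3 vNormal;              // interpolated surface normal
in vec3 vLightDir;            // surface-to-light direction
in vec3 vViewDir;             // surface-to-camera direction

uniform vec3  uDiffuseColor;  // material parameters (placeholder names)
uniform vec3  uSpecularColor;
uniform float uShininess;

out vec4 fragColor;

void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(vLightDir);
    vec3 v = normalize(vViewDir);
    vec3 h = normalize(l + v);    // Blinn-Phong half vector

    float diff = max(dot(n, l), 0.0);
    float spec = diff > 0.0 ? pow(max(dot(n, h), 0.0), uShininess) : 0.0;

    fragColor = vec4(uDiffuseColor * diff + uSpecularColor * spec, 1.0);
}
```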
But any of those shader stages, and any of the other shader stages I haven't mentioned, can nowadays read from and write to pretty much anything. This makes shaders essentially a general-purpose language running on a particular platform, the graphics card, with limited I/O support (i.e. no sockets, no file system) and a particular programming model (inherently multithreaded).
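One concrete (and admittedly contrived) example of that read/write capability, assuming OpenGL 4.3 or later: a fragment shader can atomically update a shader storage buffer while it shades:

```glsl
#version 430 core
// shader storage buffer the client binds to binding point 0;
// the fragment shader can both read and write it
layout(std430, binding = 0) buffer Counters {
    uint fragmentCount;
};

out vec4 fragColor;

void main() {
    atomicAdd(fragmentCount, 1u);   // count every shaded fragment
    fragColor = vec4(1.0);
}
```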
All this flexibility then led to OpenGL compute shaders, which nowadays are not that different from the other shader stages, except that they are not fed any data by a draw call like the other stages; instead, the client has to provide the data herself by binding buffer objects, textures and images to read from (and/or write to), and then launch the work with a dispatch call.
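A minimal compute shader sketch (again assuming OpenGL 4.3+; the buffer layouts and binding points are made up for the example):

```glsl
#version 430 core
layout(local_size_x = 64) in;   // 64 invocations per work group

layout(std430, binding = 0) buffer InputBuffer  { float inData[];  };
layout(std430, binding = 1) buffer OutputBuffer { float outData[]; };

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(inData.length())) return;   // guard against a partial last work group
    outData[i] = 2.0 * inData[i];             // trivial per-element transform
}
```

The client binds the two buffers and launches this with glDispatchCompute(workGroupCount, 1, 1) instead of a draw call.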
So, shaders can now do almost anything. But not everything you can do with shaders is also the most performant way to do it; that depends entirely on what you want to achieve in the end. First have a clear vision of the graphics design you are after, and then try to find the most performant way to realize it, with or without shaders.
That, however, requires that you know what a shader can do.
And that only comes with a lot of practice.
So: Yes, do learn shaders. 