Convolutional deep learning models are widely used for high-throughput image analysis and computer vision applications. The size of such models depends on the number of parameters and the numerical precision required for each parameter, and training them on high-dimensional images demands enormous computational power, resulting in large trained models. To address this challenge, several approaches have been proposed, including quantization and pruning. These techniques have proven effective in reducing model size at the expense of some accuracy, so several hyperparameters must be optimized to maintain high performance. This research effort evaluates the effect of different combinations of classical and hybrid loss functions and optimizers on the segmentation accuracy of the U-Net convolutional deep learning model in cell imaging, while implementing both quantization and pruning. Several metrics are used to evaluate the training and testing process, including precision, recall, F1 score, Dice coefficient, and Jaccard index. The hybrid loss combining binary cross-entropy with Jaccard loss, paired with the RMSprop optimizer, is found to yield the highest segmentation accuracy. These optimizer-loss combinations are implemented with both post-training quantization and quantization-aware training; in both cases, the accuracy metrics are not compromised significantly, decreasing by less than 1%. Pruning, as a means of reducing the computational complexity and size of the model, also proves effective, delivering the highest performance among the methods examined.
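For illustration, the following is a minimal sketch (not the authors' exact implementation) of the hybrid binary cross-entropy plus Jaccard loss described above, assuming a TensorFlow/Keras U-Net that outputs per-pixel sigmoid probabilities; the `build_unet` constructor and the chosen input shape and learning rate are hypothetical placeholders.

```python
import tensorflow as tf

def jaccard_loss(y_true, y_pred, smooth=1.0):
    # Soft Jaccard (IoU) loss per sample: 1 - |A ∩ B| / |A ∪ B|, smoothed.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3]) - intersection
    return 1.0 - (intersection + smooth) / (union + smooth)

def bce_jaccard_loss(y_true, y_pred):
    # Hybrid loss: mean per-pixel binary cross-entropy plus soft Jaccard loss.
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)  # shape (batch, H, W)
    return tf.reduce_mean(bce, axis=[1, 2]) + jaccard_loss(y_true, y_pred)

# Hypothetical usage with a Keras U-Net and the RMSprop optimizer:
# model = build_unet(input_shape=(256, 256, 1))
# model.compile(
#     optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
#     loss=bce_jaccard_loss,
#     metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
# )
```

In this sketch the two terms are simply summed; weighting the cross-entropy and Jaccard components differently is another option the abstract does not specify.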