
Thread: Assessment in Maritime Job

  1. #1
    Member jobless's Avatar
    Join Date
    Sep 2011
    Galati, Romania

    Assessment in Maritime Job

    Perhaps those in Romanian crewing will learn something from the following articles, which I find interesting.

    Assessment in Maritime Job Training and Familiarization
    by Murray Goldberg
    Feb 20, 2012, 3:51PM EST

    This is the first in a series of articles discussing current and best-practice assessment methods in maritime job training and familiarization. Specifically, I am speaking about the testing vessel operators administer to officers and crew to determine whether they are sufficiently prepared to perform their duties on their assigned vessels safely and efficiently.

    In planning this series of articles, I have reminded myself how much there is to say about assessment. It is a complex and rich topic and I expect to generate a number of articles on the subject. It is my hope that these articles will provide a basic understanding of assessment principles - an understanding that everyone involved in maritime training should have.

    This short introductory article discusses the limits and purpose of assessment, creating a foundation for subsequent articles on assessment. Subsequent articles will look at assessment reliability and validity, professional judgement, the goals and topics of assessments, and the merits of specific assessment practices in the maritime industry. Please click “follow this blog” to receive notification of those upcoming articles.
    Assessment is Primary
    It is often the case that we give a great deal more thought to training than we do to assessment. This is unfortunate because training cannot be successful (or at the very least cannot be shown to be successful) without an objective and comprehensive assessment process. Your training may be excellent at this moment, but without quality assessments you have no way of knowing this for sure, and you won’t have the tools necessary to keep it on track and continuously improve it. We need to realize that assessment is a critical and necessary part of training, not just something we do at the end in order to apply a credential. It is a primary safety and operations tool to:
    • Determine whether a candidate is fit for duty
    • Determine what gaps in knowledge and skills exist for a candidate
    • Provide key performance indicators for your organization to be used as a basis for analysis and continuous improvement.
    Does Anyone Really Understand Assessment?
    Assessment is an activity that very few organizations do well, and fewer still understand well.
    Assessment is Hard
    There is good reason for that. Assessment is not a cut-and-dried science; it is often based on intuition rather than concrete fact. Assessment tries to peer into the future of an individual and answer questions such as: "Does he or she have the knowledge necessary to perform when called upon?" "Can he or she perform this skill?"

    But how can we truthfully say? After all, so much of what we would like to assess is hidden from view inside the head of the candidate. And while the candidate may be able to demonstrate a skill under one set of conditions, what if those conditions change?

    Actually, Assessment is Even Harder
    But as hard as it is to assess someone's skills and knowledge, true assessment in the maritime industry needs to do more than that. It needs to assess their cognitive abilities as well. Can the candidate assimilate disparate information and synthesize it into a plan of action when presented with unexpected events? There is simply no way to know for sure. As such, it is something the academic community has been wrestling with for ages, and many consider it to be as much art as science.

    Having said this, there is still much we can do to improve the validity and reliability of the assessments we administer on board. A little bit of knowledge and planning can go a long way.
    The Purpose of Assessment
    One of the first things we must realize when designing an assessment program is that, as I alluded to above, full assessment is an impossibility. You cannot devise an assessment program which will completely assess a candidate's knowledge or abilities. Instead, at best, assessment is a statistical process - much like an audit - that samples bits of knowledge here, or components of an ability there, and assigns a score which is an extrapolation from the sample taken. If the sample size is very small or the assessment techniques are flawed (or both, as is sometimes the case), then the margin of error will be very large, rendering the assessment inaccurate much of the time. But even with a reasonable "sample size" and sound techniques, assessments can never be treated as absolute indicators. Some candidates will assess well and perform poorly, while others will assess poorly and perform well. This raises the question: "If assessment is flawed, then why do we assess?"
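    This sampling view of assessment can be made concrete with a small simulation - a minimal sketch in Python, where the question bank, sample size, and "true knowledge" fraction are all hypothetical numbers, not data from any real exam:

```python
import random

def assess(true_knowledge, bank_size=500, sample_size=20, seed=None):
    """Score a candidate by sampling a handful of questions from a large bank.

    true_knowledge: the fraction (0-1) of the bank the candidate actually knows.
    The returned score is an extrapolation from a small sample, so it carries
    a margin of error.
    """
    rng = random.Random(seed)
    # The candidate "knows" a fixed subset of the question bank.
    known = set(rng.sample(range(bank_size), int(true_knowledge * bank_size)))
    # The exam samples only a few questions from that bank.
    exam = rng.sample(range(bank_size), sample_size)
    return sum(q in known for q in exam) / sample_size

# A candidate who truly knows 70% of the material can score well above or
# well below 70% on any single 20-question exam.
scores = [assess(0.70, seed=s) for s in range(200)]
print(round(min(scores), 2), "to", round(max(scores), 2))
```

    Shrinking the sample size widens the spread of observed scores; that spread is the "margin of error" at work.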

    Having been a university faculty member for 10 years (and one who really dislikes grading exams) I have often asked myself that question. But it turns out there are very good answers. I will list two of them.
    Assessment Provides Data to Inform Decisions
    First is the obvious answer. In the absence of any other indicators about a candidate, an imperfect assessment is usually better than no assessment at all. Some form of assessment is required to obtain an estimate of gaps in knowledge and abilities as well as the prospects for future performance. Because any one assessment is imperfect, we should not treat its results as an absolute indicator of knowledge or competence. Having said that, even an imperfect assessment provides data that, when combined with professional judgement, can be used to make decisions.
    Assessment is Incentive to Learn
    The second answer is, to me, the most important. If nothing else, assessment is an incentive to learn. Every candidate knows that successful assessment performance is their key to employment. They are also keenly aware that not everything they need to know or do will be tested. But in the absence of knowing specifically what will be tested, they are faced with having to learn as much as they possibly can about all testable knowledge and skills. There is no greater incentive to deep and broad learning. This is an important fact to keep in mind, because anything you do to purposely or inadvertently "teach to the test", or to make candidates aware of specifically what their assessment will consist of or cover, takes away their incentive to learn as much as they can. The implications of this statement for assessment techniques will be discussed further in subsequent articles on the subject.
    We need to always keep in mind that assessment is a largely imperfect exercise. Knowing this should cause us to place assessment results in perspective, to ensure our assessment techniques create an incentive to learn, and to take a keen interest in other indicators of our mariners’ abilities. Most importantly, we should treat assessment as a tool which informs conclusions, not a conclusion in itself.

    Maritime Training: Can we Trust Our Assessment Techniques?
    Captain Jim Wright, Southwest Alaska Pilots Association (Retired), took the time to write me the other day. He said:

    Having been involved in simulator assessments of harbor pilots, masters and mates, my impression is that objectivity is the necessary ingredient for equitable performance assessment.

    I love his comment. The question is, how can we introduce objectivity into maritime assessments? It’s actually not all that easy. But there is much we can do.

    This is the second in a series of articles discussing current and best-practice assessment methods in maritime job training and familiarization. Specifically, I am speaking about the testing vessel operators administer to officers and crew to determine whether they are sufficiently prepared to perform their duties on their assigned vessels safely and efficiently. The first article in this series on assessment can be found here. Please click here to be informed of future articles.

    This article covers some assessment basics and provides an example of how BC Ferries combines techniques to improve the objectivity of their assessment of candidates. We have all seen candidates who "test well" but perform poorly. We've also seen candidates who have trouble performing when being assessed, but who we know are mariners who can be trusted. Most of the time this happens because our methods of assessment are flawed. Wouldn't it be great if candidate assessment gave us a better idea of how they will perform as mariners?

    If you are involved with maritime training in any way, this is important, fundamental information to understand. Read on.
    Reliability and Validity
    First - a quick bit of theory. Assessment boils down to two fundamental goals: reliability and validity.
    For an assessment to be reliable, it must yield the same results consistently, time after time, given the same input (in this case, the same knowledge or skill level). If you and I know the same set of information, the test will give us equal scores if it is “reliable”. If the assessment device is a bathroom scale and my weight is unchanging, the scale will show me the same number each time I weigh myself. This is reliability.
    Validity is a measure of whether the assessment correctly measures what it is designed to measure. There are two related components to validity. First - is the test measuring the thing it is supposed to measure? And second, is it yielding a correct measurement?

    For example, some people argue that written exams are sometimes not valid because they test not only knowledge, but also the candidate's writing ability - something they may not have been designed to test. In this case, the assessment is invalid because it is not measuring the right "thing". As another example, my bathroom scale may be measuring the right "thing" (my weight), but the result it gives may be inaccurate - showing 150 pounds when I am really 151. In this case it is not providing the correct measurement and is therefore not valid.

    After a bit of careful thought, you'll probably come to the correct realization that reliability is necessary for validity, but that you can have reliability without validity. In other words, reliability is a necessary, but not sufficient, condition for validity. High-quality assessments are valid, and therefore also reliable. That is your goal.
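    The bathroom-scale analogy can be written down as a toy model - a sketch with invented numbers, not a real measurement system - contrasting a reliable-but-invalid instrument with an unreliable one:

```python
import random

TRUE_WEIGHT = 151.0

def miscalibrated_scale(true_weight):
    """Perfectly consistent but biased one pound low: reliable, not valid."""
    return true_weight - 1.0

def noisy_scale(true_weight, rng):
    """Centred on the truth but inconsistent: not reliable, hence not valid."""
    return true_weight + rng.uniform(-3.0, 3.0)

rng = random.Random(42)
biased_readings = [miscalibrated_scale(TRUE_WEIGHT) for _ in range(5)]
noisy_readings = [noisy_scale(TRUE_WEIGHT, rng) for _ in range(5)]

print(biased_readings)  # the same wrong number every time
print(noisy_readings)   # scattered around the right number
```
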
    How Do We Achieve Reliability and Validity?
    This is the $64,000 question, and as I mentioned above, there is both art and science involved. Having said that, let’s look at some of the basic considerations. To do so we will look at some more specific assessment goals.

    In the maritime industry, and in fact in most industries where safety is key, assessments should embody the following qualitative attributes to the extent practically possible, and to the extent that they do not negate a more important goal. At a minimum, assessments should be:
    • Objective: assessment techniques are objective if the views of the person conducting the assessment do not come into play in the results. For example, multiple-choice tests are completely objective. Having a trainer rate a candidate on a scale of 1 through 10 for steering performance may be highly subjective. Subjective assessments are less reliable because they depend on the examiner and different examiners will produce different results.
    • Standardized: assessments are standardized if all candidates are assessed on the same knowledge and competencies, using the same assessment techniques. Standardized coverage and techniques are necessary components for reliability.
    • Comprehensive: your assessment techniques are comprehensive if they test all required knowledge and competencies. This is also a requirement to achieve reliability.
    • Targeted: assessments are targeted if they test specifically the knowledge and skills required, and no more. This is the first half of validity - ensuring that your techniques are not inadvertently testing something which is not required for safe and efficient performance as a mariner.

    Unfortunately, the goals above are sometimes in conflict with one another. We often need to (partially) give up one assessment goal in order to (partially) achieve another. Therefore, even if one of the techniques serving these goals yields imperfectly reliable or valid results, it is not necessarily the case that we will want to abandon it altogether.
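    Two of the attributes above - comprehensiveness and targeting - can even be audited mechanically if every exam item is tagged with the competency it is meant to test. A sketch (the competency names and items are hypothetical):

```python
# Competencies required for the position (hypothetical examples).
required = {"steering", "radio procedure", "watchkeeping", "firefighting"}

# Each exam item is tagged with the competency it is meant to test.
exam_items = [
    {"id": 1, "competency": "steering"},
    {"id": 2, "competency": "radio procedure"},
    {"id": 3, "competency": "essay writing"},  # not required for the job
]

covered = {item["competency"] for item in exam_items}

gaps = required - covered        # violates "comprehensive"
off_target = covered - required  # violates "targeted"

print("not covered:", sorted(gaps))
print("not required but tested:", sorted(off_target))
```
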
    Combining Assessment Techniques
    This is a very good argument for combining assessment methods, each of which has different strengths and is designed to achieve different goals. We will discuss this more in a subsequent posting, but for now I’d like to mention a good example of this used by BC Ferries.

    BC Ferries has created a new training and education system called the SEA (Standardized Education and Assessment) program. With the SEA program, BC Ferries recognized that it was necessary to make not only their training more standardized and objective, but also their assessment. The problem, as we mentioned above, was that to achieve strict objectivity (a necessary component of reliability) it would be necessary to adopt an assessment technique such as multiple choice exams. This kind of exam is a great tool, but used alone, it misses much of what we would like to assess. For example, the skills which make up specific job duties, personality traits and communication skills are all difficult to test using this kind of assessment. On the other hand, multiple choice exams are a very good way of objectively testing knowledge.

    Therefore, assessment in the SEA program is multi-modal. It consists of:
    • Dynamically-created, randomized and automatically graded multiple-choice exams (delivered by MarineLMS),
    • An oral “scenario-based” exam where the candidate is given some scenarios and asked for the appropriate action,
    • A set of demonstrative activities where the candidate demonstrates the ability to undertake certain tasks, and
    • A meeting/interview with a superior - usually the master.

    Some of these are more objective assessments than others, though the ones that are less objective can be made more so - more on that later. Together they provide a body of data on which a quality assessment decision can be based.
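    One way to picture the output of such a multi-modal process is as a structured record per candidate. The sketch below is my own illustration - the field names, weights, and composite score are invented for the example, not BC Ferries' actual policy:

```python
from dataclasses import dataclass

@dataclass
class AssessmentRecord:
    """One candidate's results across several assessment modes."""
    candidate: str
    mcq_score: float       # automatically graded multiple-choice exam (0-1)
    oral_scenarios: float  # examiner rating of scenario responses (0-1)
    demonstrations: float  # pass rate on demonstrative tasks (0-1)
    interview_notes: str = ""

    def summary(self):
        # Illustrative weights only; the composite is data for a human
        # decision, not the decision itself.
        composite = (0.4 * self.mcq_score
                     + 0.3 * self.oral_scenarios
                     + 0.3 * self.demonstrations)
        return {"candidate": self.candidate,
                "composite": round(composite, 2),
                "notes": self.interview_notes}

rec = AssessmentRecord("A. Mariner", 0.85, 0.70, 0.90, "strong on deck work")
print(rec.summary())
```

    Keeping the per-mode scores alongside the composite, rather than collapsing them, preserves the data that professional judgement needs.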

    The BC Ferries clearance process is an excellent model and I will write about it in more detail in a subsequent article.
    Professional Judgement
    The goal of assessment in the maritime industry is the determination of whether someone is fit for service, and if not, what remedial action could be applied to make them so. Results such as numeric scores are somewhat arbitrary. It is more important that the scores are reliable and that the assessment is targeted (tests what it is supposed to test). If so, you and your organization will develop an intuitive understanding of what you can expect from a candidate who scored a 65% vs. an 85% and use that understanding to guide your actions.

    Likewise, it must be remembered that exams are tools which yield data, not decisions. Professional judgement, informed by the real data generated by valid and reliable assessments, is required to make a final decision. Just as it would be dangerous for anyone, no matter how experienced, to make an assessment decision in the absence of good data, it is just as dangerous to make assessment decisions strictly “by the numbers”.

    You may argue that I’ve just introduced subjectivity into the interpretation of assessment results. This may be paradoxically true, but at some point experienced professional judgement must come into play. Assessments provide data on which this judgement can be based - but one can never rely on numbers alone.
    If you think about these principles and goals in relation to your current training practices, you are likely to find that you are strong on some and weak on others. More importantly, after ranking the desirability of the attributes above (in addition to any you care to add), you may find that your current assessment techniques are not aligned with your assessment goals. If this is the case, it should be a call to action to achieve alignment. Sadly, that is not always easy.

    We will talk more about specific assessment techniques and balancing these often conflicting goals in upcoming articles. Please click here if you would like to be informed when subsequent articles come out.

    In the meantime, it’s been a pleasure. Have a great day!

    # # #

  2. #2
    Member jobless's Avatar
    Join Date
    Sep 2011
    Galati, Romania

    Re: Assessment in Maritime Job


    Assessment in the Maritime Industry - What’s More Important: Skills or Knowledge?

    This is a continuation of a series of articles on assessment in the maritime industry (click here if you would like to be notified of upcoming articles). This article looks at one of the most basic aspects of creating effective assessments: the differentiation between skills and knowledge. We need to give this serious thought as a basic element of assessment planning.

    Given the importance of skills (and therefore our focus on them) in the maritime industry, it is not always obvious what role knowledge plays. It is important to clarify the distinction between skills and knowledge not only because this will guide our assessment topics, but also because different assessment techniques are required for testing skills and knowledge. For example, skill assessment requires techniques such as demonstrative exams (“show me how to …”), or simulation-based assessment. Knowledge assessment is very different. In that case, objective exams, including multiple choice tests can be very effective.

    But even before looking at specific techniques for assessing each, it is important to understand the difference between knowledge and skills, how one influences the other, and the relative importance of each in the maritime industry.
    The Need to Assess Knowledge (as well as Skills)
    When most of us think about assessment in the maritime industry, the first thing that comes to mind is skill. Can the officer or crew-member safely perform the skills required of him or her? Can the quartermaster steer the ship? Can the engineer perform the required mechanical maintenance? Because these are the thoughts that come to mind, we tend to organize our assessments around the evaluation of skills. After all, if everyone performs their necessary skills correctly, what more is there?
    The Research
    Let’s see what the experts have to say:

    “A study by the U.S. Coast Guard found many areas where the industry can improve safety and performance … the three largest problems were fatigue, inadequate communication ... and inadequate technical knowledge.”
    Human Error and Marine Safety - U.S. Coast Guard Research & Development Center

    “Knowledge-based mistakes may occur when we have to think our way through a novel situation for which we do not have a procedure or “rule”. … Knowledge-based ... mistakes by crewmembers account for 13% [of maritime accidents].”
    Searching for the Root Causes of Maritime Casualties - Maritime Research Centre, Warsash, Southampton, UK

    You'll be interested to know that the study from which the second quote was taken notes that "skill-based" mistakes account for 9% of accidents - fewer than "knowledge-based" mistakes!
    Knowledge - the Foundation of all Skills
    But some argue that it is not necessary to assess knowledge separately. After all, it is the skills that are the end goal. If the mariner can perform any task skillfully - does that not mean that they already possess all required knowledge to perform that skill? And if that is true, once we test the skill, what is the point of testing the knowledge?

    First, as everyone will recognize, it is impossible to fully assess any candidate’s skills. We cannot assess performance in every scenario, contingency and context that a mariner will face during their career. We can and should attempt to cover a broad range of these, but we will always fall short of completeness. How do we assess their ability to perform in novel situations then - ones which we cannot directly test?

    To answer this, look at the second quote, above, from the Maritime Research Center. It says: “Knowledge-based mistakes may occur when we have to think our way through a novel situation for which we do not have a procedure or ‘rule’”. At the heart of this statement is the fact that underlying every skill is a set of knowledge on which that skill is built. We can teach a monkey how to perform a skill under conditions which never change. However, as soon as conditions vary slightly, only a human with knowledge and the ability to reason can make intelligent decisions and continue to perform the skill correctly and safely. The deeper the knowledge, the more readily adaptable the person is to more widely fluctuating conditions.

    This is especially important in light of the ever-increasing complexity of vessel-based systems and the correspondingly complex knowledge required of modern mariners. Understanding “how” to perform some task without understanding at least a little about the systems which underlie the task being performed creates a safety risk.
    • First, an understanding of the underlying systems will help inform the mariner as to the consequences that arise when the task is not performed or is incorrectly performed. Having this knowledge creates a critical incentive to conscientious (and therefore safe) performance.
    • Similarly, having an understanding of the underlying systems will help mariners make intelligent decisions when presented with unexpected equipment operation, unexpected readings, or a novel emergency situation.

    Despite our best efforts, we cannot train and assess for every situation nor is it possible to train and assess motivation into a candidate. Therefore, the next best thing we can do is to train and assess the knowledge which will help motivate mariners to do their job well, and to provide them with the necessary tools to react intelligently when an unexpected situation arises. This is the age of the “knowledge worker” - and the maritime industry has entered this age. We need to prepare them for the job. Safety depends on it.

    A Big Example
    If you are not convinced that knowledge training and assessment (in addition to skill training and assessment) is both:
    1. Important, and
    2. Often not given the weight it deserves,

    then look at this quote from the National Academy of Engineering and the National Research Council entitled “Interim Report on Causes of the Deepwater Horizon Oil Rig Blowout and Ways to Prevent Such Events”:

    “Personnel on the Deepwater Horizon MODU were mostly trained on the job, and this training was supplemented with limited short courses .... While this appears to be consistent with industry standard practice … it is not consistent with other safety-critical industries.

    “The failures and missed indications of hazard were not isolated events … [these] raise questions about the adequacy of operating knowledge on the part of key personnel.”

    It is very interesting to me how they conclude that the training was not consistent with other safety-critical industries. For 10 years I was a faculty member of Computer Science at the University of British Columbia. There, one of the courses I taught was software engineering, with a module covering safety-critical software systems. The techniques taught for systems where malfunctioning software could put humans at risk were quite different from those for other software systems. A tremendous degree of oversight, evaluation and testing was applied - not only to the software, but also to the processes under which the software was created. Safety-critical activities require a higher order of training and assessment.

    The maritime industry is a safety-critical industry with increasingly complex knowledge requirements. Knowledge, not just skills, must be taught and assessed.
    So - which is more important to assess - knowledge or skills? Clearly both need to be assessed. If you are only assessing skills, you are compromising safety when crew-members encounter unexpected situations - as will always be the case. You are also creating a motivational issue because workers will not fully appreciate the consequences of failure to perform. If you are only assessing knowledge, you are also missing a critical aspect of performance testing. Each requires a different assessment technique - a topic that will be covered in an upcoming post. Click here if you would like to be notified of subsequent articles.

    # # #

    About The Author:
    Murray Goldberg is the founder and President of Marine Learning Systems, the creator of MarineLMS - the learning management system designed specifically for maritime industry training. Murray began research in eLearning in 1995 as a faculty member of Computer Science at the University of British Columbia. He went on to create WebCT, the world's first commercially successful LMS for higher education, serving 14 million students in 80 countries. Now, with Marine Learning Systems, Murray hopes to play a part in advancing the art and science of learning in the maritime industry.

  3. #3
    Member jobless's Avatar
    Join Date
    Sep 2011
    Galati, Romania

    Re: Assessment in Maritime Job

    The Importance of Assessing Attitude

    Skills and knowledge are requirements. You can’t be safe and efficient without them. However, a mariner may have all the required skills and knowledge, but still be a very poor, unsafe mariner if his or her attitude or ethics are poor. Does the person care about the job? Do they care about their fellow crewmembers? Do they even care about themselves? When they see a problem, are they the type of person to stop and report it or fix it? Or do they just keep on walking? Do they instill professionalism in others? Or do they breed a lack of professionalism, and therefore poor performance in others?

    Everyone knows who among their fellow workers have poor work attitudes, and in most cases it eventually catches up with them. Even so, there is a strong argument that we should actively assess attitude - providing some measure of it. If we do so, then we can act on real data, and do so much more quickly than we otherwise might be able to. Unprofessional attitudes can be poisonous to safety and the work environment and the more quickly we address them, the better off the organization is likely to be as a whole.

    The problem is, how do you measure it?
    Measuring Attitude

    Many organizations (hopefully most) perform regular performance appraisals. These are typically done by a superior and will often mention attitude and professionalism - at least if an issue is perceived. This is a good start, but it is incomplete and there is more that can be done.
    Psychological Testing
    There is a class of psychological testing, called objective personality testing, that is designed to measure attitude. These can take the form of written tests or, more recently, a technique called "gamification" where the candidate plays a scenario-based game and is required to make decisions based on the scenarios presented. The test answers or game-based reactions are studied by behavioural theory models which try to translate the reactions into measures of attitude, teamwork, work ethic, etc. You may be rolling your eyes already at the idea that we can draw meaningful conclusions from such tests, but keep an open mind for at least a few more minutes.

    These tests usually have “validity” questions built in - questions which are there only to determine whether the test taker answered truthfully. Many people feel strongly that this kind of testing is very valuable and can, more often than not, produce useful and reliable information about the candidate's attitude. Others argue that these tests require a high level of expertise to interpret - even though they are intended to be objective.

    I am not even close to being a behavioral theorist, so I hesitate to offer an opinion on this kind of testing. Having said that, I do question the reliability of this kind of testing. I can't help but think that a semi-intelligent person would be able to adjust his or her answers to those which he or she believes are the desirable answers. After all, even if a person has no morals, he or she probably knows what good morals look like and knows that society values them. A really well constructed test may be hard to "fake", but impossible? I am not sure.

    Even if we do consider this kind of testing relatively reliable, the test is not telling us how the attitude manifests itself in terms of performance - only that there is an attitude issue. Therefore - what do we do with the results? It may be reasonable to consider the results if we are making a hiring decision. But what about test results for an existing mariner? Would a poor test result constitute reasonable grounds for remedial training or even possibly dismissal? Possibly the former, but likely not the latter.

    Instead, at least for existing employees, it may be better to directly measure the attitude's effect on performance. We can do this using 360-degree evaluations.
    360-Degree Evaluations
    A 360 degree evaluation gets its name because of who performs the evaluation. Here, 360 degrees means "on all sides" of the person being evaluated. Specifically, the candidate is evaluated by his or her superiors, reports and peers. Some may consider such an evaluation at odds with the hierarchical reporting structure of the maritime industry. We are accustomed to being evaluated by our superiors. But what about our peers and subordinates? I suspect for most of us, if we think about it, we will come to the conclusion that knowledge of how our peers and subordinates view our performance can make us better at our job. If so, read on.

    A 360 evaluation can assess a variety of attributes, but they are generally geared toward subjective attributes such as attitude and professionalism. The evaluation typically is based on a series of questions that evaluators are asked about the person being evaluated. The evaluators, normally numbering between eight and twelve, are usually a combination of some suggested by the person being evaluated and some chosen by the supervisor. All must have worked with the candidate sufficiently that they are able to render a meaningful opinion. In fact, studies show that the longer the assessors know the person being evaluated, the more reliable the assessments are. Not surprising.

    The questions in the assessment can be direct and to the point, as long as they are the kinds of questions that the evaluators have a basis for answering. It should be clear to the assessors that all questions are to be answered based on their direct observations when working with the person being assessed. Likewise, there should always be an option to answer "insufficient knowledge to answer this question". If possible, each question should be answered on a scale (6: Agree completely, 5: Agree, 4: Somewhat agree, 3: Somewhat disagree, 2: Disagree, 1: Disagree completely), and there should be an opportunity for the evaluator to add a comment to any answer. Finally, feedback should be provided anonymously in order to encourage honesty - especially from subordinates.
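    With a fixed scale and an "insufficient knowledge" option, anonymous responses can be aggregated per question quite simply. A sketch (the questions and ratings are invented for illustration):

```python
from statistics import mean

def summarize(answers):
    """Average the ratings, ignoring "insufficient knowledge" (None) answers."""
    rated = [a for a in answers if a is not None]
    return {"average": round(mean(rated), 2),
            "answered": len(rated),
            "out_of": len(answers)}

# Each anonymous evaluator answers on the 6-point scale; None means
# "insufficient knowledge to answer this question".
responses = {
    "upholds the highest safety standards": [6, 5, None, 6, 4, 5],
    "inspires conscientious performance": [3, None, None, 4, 2, 3],
}

report = {question: summarize(answers) for question, answers in responses.items()}
for question, stats in report.items():
    print(question, stats)
```

    Reporting how many evaluators actually answered each question matters: an average built from two ratings deserves less weight than one built from ten.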

    To assess attitude, you might ask questions such as:
    This person presents themselves professionally on the job, in both appearance and actions.
    This person inspires conscientious performance in others.
    I trust this person to uphold the highest safety standards.
    This person engages in all activities with a positive attitude, enthusiasm, and a smile.
    This person always wears the required safety gear.

    The examples above focus on professionalism and performance, but you can also assess leadership, teamwork, or any number of other soft skills. The only requirements are that questions must relate directly to the job and be relevant to key company objectives.

    Once the evaluations have been received, the results are assembled into a report which can be used as a basis for decisions.
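    As a rough illustration of that assembly step, here is a minimal Python sketch. The article does not prescribe any tooling, so the data layout, scores, and function name below are hypothetical; real 360 platforms do this (and much more) for you.

    ```python
    from collections import defaultdict
    from statistics import mean

    # Hypothetical evaluation records: (evaluator_group, question, score on the 1-6 scale).
    responses = [
        ("peer", "professionalism", 5), ("peer", "professionalism", 6),
        ("subordinate", "professionalism", 4), ("superior", "professionalism", 5),
        ("peer", "safety", 6), ("subordinate", "safety", 6),
        ("superior", "safety", 5),
    ]

    def build_report(responses):
        """Average the scores for each question, broken down by evaluator group."""
        grouped = defaultdict(list)
        for group, question, score in responses:
            grouped[(question, group)].append(score)
        return {key: round(mean(scores), 2) for key, scores in grouped.items()}

    for (question, group), avg in sorted(build_report(responses).items()):
        print(f"{question:15s} {group:12s} {avg}")
    ```

    Breaking the averages down by evaluator group, rather than pooling everything, is what lets the report show how superiors, peers, and subordinates each see the candidate.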

    The task of creating the evaluations, distributing them to the evaluators, collecting the completed evaluations, and assembling the results into a report is a major effort and was a significant impediment to the use of 360 degree evaluations in the past. Now, however, technology has come to the rescue and there are many good systems which automate all the hard work.

    There is a lot more to the correct design and delivery of 360 degree evaluations - this is just a start. It should be noted that while it is possible, with planning, to do a great job of creating and delivering 360 degree assessments, it is also easy to do a bad job, and doing so can cause real damage to morale that takes time to repair. Done well, however, it can be a valuable source of information not only for the company, but also for the individual, helping them grow in their career. So the message is: do your homework really well before embarking on this.

    The Benefits of 360 Degree Assessments

    360 degree assessments tend to be quite effective and useful when done well. I have had experience using them many times, as both the subject and the evaluator.

    The first benefit derives from the fact that there are many evaluators, not just one. Because of this, it is easy to identify trends in the reports and to have greater confidence in their validity. If there is a problem, it has likely been noticed by more than one person. Similarly, these assessments are less prone to being skewed by one outlying evaluation. As long as multiple people are assessing from the same "viewpoint" (peer, superior, or subordinate), we can usually discount evaluations which are not consistent with the large majority.
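    One simple way to discount such outlying evaluations is to drop scores far from the group median. This is just a sketch of the idea (the article names no specific method; the threshold and function name are my own assumptions):

    ```python
    from statistics import median

    def filter_outliers(scores, max_dev=2):
        """Keep only scores within max_dev points of the group median.

        Scores are on the 1-6 agree/disagree scale. Assumes all scores come
        from the same viewpoint (peers, superiors, or subordinates).
        """
        m = median(scores)
        return [s for s in scores if abs(s - m) <= max_dev]

    peer_scores = [5, 5, 6, 1, 5]          # one evaluator far out of line
    print(filter_outliers(peer_scores))    # the outlying 1 is discounted
    ```

    Any such filtering should be applied within one viewpoint at a time; a genuine gap between, say, peer and subordinate scores is a finding, not an outlier.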

    Another benefit is that an employee can sometimes present themselves differently to, for example, their superiors than to their peers or subordinates. These evaluations are therefore far better at identifying one-sided issues (such as poor leadership or an expressed lack of respect for authority) than an evaluation done solely by a superior.

    Finally, 360 degree evaluations also provide a candid source of feedback that the person being evaluated does not often have access to. It can be a real eye-opener to see how your work is viewed by others. If the process is constructive and done with sensitivity, this can help the person being evaluated identify issues they were not aware of - but would very much like to work on.

  4. #4

    Re: Assessment in Maritime Job

    I'm convinced you have successfully induced that motivational 'awareness' into the subconscious of the crewing-agency owners. I bet they're tossing and turning in their sheets, poor things, at the thought that up to now they've been hiring people without knowing 'The Benefits of 360 Degree Assessments'. From now on they surely won't be sending inexperienced third officers straight into the job, fitters as helmsmen, and foresters into the Sahara... Clearly things must be bad if 'A study by the U.S. Coast Guard found many areas where the industry can improve safety and performance'. Well, if even the gold standard of maritime knowledge says there's room for improvement...
    There is another possibility: that I'm the only fool who had the patience, the spare time, and the poor inspiration to read this whole pile of typically American generalities and banalities.
    But let's be optimists... There must be at least one more like me.
    But!!! There is one more conceivable scenario: that you copy/pasted this here out of curiosity, to see whether anyone actually reads it. In that rather unlikely case... respect.

  5. #5
    Senior Member Andy_Fox's Avatar
    Join Date
    Mar 2009
    At sea

    Re: Assessment in Maritime Job

    It might have been simpler to just post the link instead of filling three posts with all this rambling. Anyway, I agree with Octav: nobody has the patience to read something like this, not to mention that nobody (in Romanian crewing) will ever apply it. Both the terminology and the structure of the "study" are typical of books like "How to Get Rich in 30 Days" - books that haven't made anyone rich so far, not even their authors...


